Aug 13 19:43:52 crc systemd[1]: Starting Kubernetes Kubelet...
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
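[Editor's sketch] The deprecation warnings above all point at the kubelet's --config file rather than command-line flags. As a minimal, hypothetical illustration only (the real /etc/kubernetes/kubelet.conf on this node is rendered by the machine-config operator and may differ), the named flags map onto kubelet.config.k8s.io/v1beta1 KubeletConfiguration fields roughly like this, with values copied from the FLAG dump later in this log:

    # Hypothetical KubeletConfiguration fragment; field names are from the
    # kubelet.config.k8s.io/v1beta1 API, values taken from the FLAG dump below.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    registerWithTaints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    systemReserved:
      cpu: 200m
      ephemeral-storage: 350Mi
      memory: 350Mi
    # --minimum-container-ttl-duration has no config-file field; per the warning,
    # eviction thresholds (evictionHard / evictionSoft) replace it.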
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.177165    4183 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182423    4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182470    4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182483    4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182492    4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182501    4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182509    4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182517    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182526    4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182534    4183 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182542    4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182551    4183 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182559    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182567    4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182576    4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182584    4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182592    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182600    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182608    4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182617    4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182624    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182633    4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182641    4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182650    4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182658    4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182666    4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182733    4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182748    4183 feature_gate.go:227] unrecognized feature gate: NewOLM
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182757    4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182765    4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182858    4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182874    4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182883    4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182891    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182900    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182908    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182918    4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182926    4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182934    4183 feature_gate.go:227] unrecognized feature gate: MetricsServer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182943    4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182951    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182959    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182967    4183 feature_gate.go:227] unrecognized feature gate: Example
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182975    4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182984    4183 feature_gate.go:227] unrecognized feature gate: PinnedImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182992    4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183018    4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183026    4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183034    4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183042    4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183051    4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183060    4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183069    4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183078    4183 feature_gate.go:227] unrecognized feature gate: SignatureStores
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183088    4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183097    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183107    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183116    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183125    4183 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183134    4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183145    4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183412    4183 flags.go:64] FLAG: --address="0.0.0.0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183522    4183 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183535    4183 flags.go:64] FLAG: --anonymous-auth="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183543    4183 flags.go:64] FLAG: --application-metrics-count-limit="100"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183609    4183 flags.go:64] FLAG: --authentication-token-webhook="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183620    4183 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183630    4183 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183638    4183 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183645    4183 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183652    4183 flags.go:64] FLAG: --azure-container-registry-config=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183659    4183 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183667    4183 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183679    4183 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183688    4183 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183695    4183 flags.go:64] FLAG: --cgroup-root=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183701    4183 flags.go:64] FLAG: --cgroups-per-qos="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183708    4183 flags.go:64] FLAG: --client-ca-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183715    4183 flags.go:64] FLAG: --cloud-config=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183721    4183 flags.go:64] FLAG: --cloud-provider=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183727    4183 flags.go:64] FLAG: --cluster-dns="[]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183740    4183 flags.go:64] FLAG: --cluster-domain=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183750    4183 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183757    4183 flags.go:64] FLAG: --config-dir=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183764    4183 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183771    4183 flags.go:64] FLAG: --container-log-max-files="5"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183835    4183 flags.go:64] FLAG: --container-log-max-size="10Mi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183849    4183 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183858    4183 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183865    4183 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183872    4183 flags.go:64] FLAG: --contention-profiling="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183879    4183 flags.go:64] FLAG: --cpu-cfs-quota="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183886    4183 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183893    4183 flags.go:64] FLAG: --cpu-manager-policy="none"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183904    4183 flags.go:64] FLAG: --cpu-manager-policy-options=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183916    4183 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183923    4183 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183929    4183 flags.go:64] FLAG: --enable-debugging-handlers="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183939    4183 flags.go:64] FLAG: --enable-load-reader="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183946    4183 flags.go:64] FLAG: --enable-server="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183953    4183 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183970    4183 flags.go:64] FLAG: --event-burst="100"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183978    4183 flags.go:64] FLAG: --event-qps="50"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183984    4183 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183992    4183 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183998    4183 flags.go:64] FLAG: --eviction-hard=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184007    4183 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184013    4183 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184024    4183 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184035    4183 flags.go:64] FLAG: --eviction-soft=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184043    4183 flags.go:64] FLAG: --eviction-soft-grace-period=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184051    4183 flags.go:64] FLAG: --exit-on-lock-contention="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184058    4183 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184067    4183 flags.go:64] FLAG: --experimental-mounter-path=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184075    4183 flags.go:64] FLAG: --fail-swap-on="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184083    4183 flags.go:64] FLAG: --feature-gates=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184100    4183 flags.go:64] FLAG: --file-check-frequency="20s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184107    4183 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184114    4183 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184121    4183 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184128    4183 flags.go:64] FLAG: --healthz-port="10248"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184136    4183 flags.go:64] FLAG: --help="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184143    4183 flags.go:64] FLAG: --hostname-override=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184157    4183 flags.go:64] FLAG: --housekeeping-interval="10s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184164    4183 flags.go:64] FLAG: --http-check-frequency="20s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184171    4183 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184177    4183 flags.go:64] FLAG: --image-credential-provider-config=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184183    4183 flags.go:64] FLAG: --image-gc-high-threshold="85"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184190    4183 flags.go:64] FLAG: --image-gc-low-threshold="80"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184270    4183 flags.go:64] FLAG: --image-service-endpoint=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184285    4183 flags.go:64] FLAG: --iptables-drop-bit="15"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184301    4183 flags.go:64] FLAG: --iptables-masquerade-bit="14"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184308    4183 flags.go:64] FLAG: --keep-terminated-pod-volumes="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184315    4183 flags.go:64] FLAG: --kernel-memcg-notification="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184323    4183 flags.go:64] FLAG: --kube-api-burst="100"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184330    4183 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184336    4183 flags.go:64] FLAG: --kube-api-qps="50"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184342    4183 flags.go:64] FLAG: --kube-reserved=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184355    4183 flags.go:64] FLAG: --kube-reserved-cgroup=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184366    4183 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184373    4183 flags.go:64] FLAG: --kubelet-cgroups=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184380    4183 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184387    4183 flags.go:64] FLAG: --lock-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184394    4183 flags.go:64] FLAG: --log-cadvisor-usage="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184401    4183 flags.go:64] FLAG: --log-flush-frequency="5s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184408    4183 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184432    4183 flags.go:64] FLAG: --log-json-split-stream="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184440    4183 flags.go:64] FLAG: --logging-format="text"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184446    4183 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184455    4183 flags.go:64] FLAG: --make-iptables-util-chains="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184462    4183 flags.go:64] FLAG: --manifest-url=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184468    4183 flags.go:64] FLAG: --manifest-url-header=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184486    4183 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184493    4183 flags.go:64] FLAG: --max-open-files="1000000"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184502    4183 flags.go:64] FLAG: --max-pods="110"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184508    4183 flags.go:64] FLAG: --maximum-dead-containers="-1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184516    4183 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184523    4183 flags.go:64] FLAG: --memory-manager-policy="None"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184529    4183 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184541    4183 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184550    4183 flags.go:64] FLAG: --node-ip="192.168.126.11"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184557    4183 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184575    4183 flags.go:64] FLAG: --node-status-max-images="50"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184581    4183 flags.go:64] FLAG: --node-status-update-frequency="10s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184588    4183 flags.go:64] FLAG: --oom-score-adj="-999"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184595    4183 flags.go:64] FLAG: --pod-cidr=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184611    4183 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0319702e115e7248d135e58342ccf3f458e19c39e86dc8e79036f578ce80a4"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184623    4183 flags.go:64] FLAG: --pod-manifest-path=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184630    4183 flags.go:64] FLAG: --pod-max-pids="-1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184637    4183 flags.go:64] FLAG: --pods-per-core="0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184644    4183 flags.go:64] FLAG: --port="10250"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184650    4183 flags.go:64] FLAG: --protect-kernel-defaults="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184657    4183 flags.go:64] FLAG: --provider-id=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184670    4183 flags.go:64] FLAG: --qos-reserved=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184681    4183 flags.go:64] FLAG: --read-only-port="10255"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184687    4183 flags.go:64] FLAG: --register-node="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184694    4183 flags.go:64] FLAG: --register-schedulable="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184701    4183 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184712    4183 flags.go:64] FLAG: --registry-burst="10"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184722    4183 flags.go:64] FLAG: --registry-qps="5"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184737    4183 flags.go:64] FLAG: --reserved-cpus=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184744    4183 flags.go:64] FLAG: --reserved-memory=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184752    4183 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184759    4183 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184765    4183 flags.go:64] FLAG: --rotate-certificates="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184878    4183 flags.go:64] FLAG: --rotate-server-certificates="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184890    4183 flags.go:64] FLAG: --runonce="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184903    4183 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184912    4183 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184919    4183 flags.go:64] FLAG: --seccomp-default="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184926    4183 flags.go:64] FLAG: --serialize-image-pulls="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184933    4183 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184940    4183 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184947    4183 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184953    4183 flags.go:64] FLAG: --storage-driver-password="root"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184973    4183 flags.go:64] FLAG: --storage-driver-secure="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184982    4183 flags.go:64] FLAG: --storage-driver-table="stats"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184989    4183 flags.go:64] FLAG: --storage-driver-user="root"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184996    4183 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185003    4183 flags.go:64] FLAG: --sync-frequency="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185010    4183 flags.go:64] FLAG: --system-cgroups=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185017    4183 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185038    4183 flags.go:64] FLAG: --system-reserved-cgroup=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185045    4183 flags.go:64] FLAG: --tls-cert-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185052    4183 flags.go:64] FLAG: --tls-cipher-suites="[]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185059    4183 flags.go:64] FLAG: --tls-min-version=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185068    4183 flags.go:64] FLAG: --tls-private-key-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185074    4183 flags.go:64] FLAG: --topology-manager-policy="none"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185081    4183 flags.go:64] FLAG: --topology-manager-policy-options=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185087    4183 flags.go:64] FLAG: --topology-manager-scope="container"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185102    4183 flags.go:64] FLAG: --v="2"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185116    4183 flags.go:64] FLAG: --version="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185124    4183 flags.go:64] FLAG: --vmodule=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185131    4183 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185139    4183 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185244    4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185258    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185265    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185272    4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185280    4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185295    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185307    4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185314    4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185322    4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185329    4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185337    4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185344    4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185354    4183 feature_gate.go:227] unrecognized feature gate: NewOLM
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185369    4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185375    4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185381    4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185387    4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185394    4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185400    4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185406    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185411    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185423    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185432    4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185438    4183 feature_gate.go:227] unrecognized feature gate: MetricsServer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185444    4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185450    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185455    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185463    4183 feature_gate.go:227] unrecognized feature gate: Example
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185470    4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185476    4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185482    4183 feature_gate.go:227] unrecognized feature gate: PinnedImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185494    4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185500    4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185506    4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185513    4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185520    4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185527    4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185537    4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185545    4183 feature_gate.go:227] unrecognized feature gate: SignatureStores
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185552    4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185559    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185566    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185573    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185581    4183 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185592    4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185600    4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185607    4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185615    4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185622    4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185630    4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185636    4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185642    4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185647    4183 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185655    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185661    4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185667    4183 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185673    4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185678    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185684    4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185690    4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185698    4183 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.214743    4183 server.go:487] "Kubelet version" kubeletVersion="v1.29.5+29c95f3"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.214852    4183 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214895    4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214906    4183 feature_gate.go:227] unrecognized feature gate: NewOLM
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214914    4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214922    4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214932    4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214940    4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214947    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214955    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214962    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214970    4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214978    4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214986    4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215020    4183 feature_gate.go:227] unrecognized feature gate: MetricsServer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215030    4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215038    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215047    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215054    4183 feature_gate.go:227] unrecognized feature gate: Example
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215064    4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215070    4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215077    4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215084    4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215091    4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215098    4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215106    4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215113    4183 feature_gate.go:227] unrecognized feature gate: PinnedImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215120    4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215127    4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215136    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215145    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215154    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215162    4183 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215171    4183 feature_gate.go:227] unrecognized feature gate: SignatureStores
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215180    4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215188    4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215232    4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215247    4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215255    4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215263    4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215272    4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215279    4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215288    4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215296    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215305    4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215313    4183 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215321    4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215333    4183 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215341    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215348    4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215357    4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215365    4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215373    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215382    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215390    4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215399    4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215407    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215416    4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215424    4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215432    4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215440    4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215449    4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.215458    4183 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215645    4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215660    4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215669    4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215678    4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215686    4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215695    4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215703    4183 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215712    4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215719    4183 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215727    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215736    4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215744    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215754    4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215763    4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215832    4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215847    4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215855    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215864    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215873    4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215881    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215889    4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215897    4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215904    4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215913    4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215921    4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215929    4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215937    4183 feature_gate.go:227] unrecognized feature gate: NewOLM
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215946    4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215954    4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215962    4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215971    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215979    4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215987    4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215996    4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216004    4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216012    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216021    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216029    4183 feature_gate.go:227] unrecognized feature gate: MetricsServer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216038    4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216048    4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216056    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216064    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216073    4183 feature_gate.go:227] unrecognized feature gate: Example
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216081    4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216089    4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216098    4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216106    4183 feature_gate.go:227] unrecognized feature gate: PinnedImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216114    4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216122    4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216130    4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216141    4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216149    4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216160    4183 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216169    4183 feature_gate.go:227] unrecognized feature gate: SignatureStores
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216177    4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216185    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216227    4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216244    4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216252    4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216261    4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.216270    4183 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.218639    4183 server.go:925] "Client rotation is on, will bootstrap in background"
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.261135    4183 bootstrap.go:266] part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-06-27 13:02:31 +0000 UTC
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.264516    4183 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.268356    4183 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
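[Editor's sketch] The feature_gate.go:250 summaries above show the gate map the kubelet actually applied; the long runs of "unrecognized feature gate" warnings are cluster-level OpenShift gates rendered into the kubelet config that the embedded Kubernetes feature-gate registry does not know, so they are logged and skipped rather than applied. As a hypothetical illustration, the recognized gates would correspond to a featureGates stanza in the same KubeletConfiguration sketch, with values copied from the map printed above:

    # Hypothetical featureGates fragment matching the feature_gate.go:250 output.
    featureGates:
      CloudDualStackNodeIPs: true
      DisableKubeletCloudCredentialProviders: true
      DynamicResourceAllocation: false
      EventedPLEG: false
      KMSv1: true  # deprecated; the source of the feature_gate.go:240 warning
      MaxUnavailableStatefulSet: false
      NodeSwap: false
      RouteExternalCertificate: false
      ServiceAccountTokenNodeBinding: false
      ServiceAccountTokenNodeBindingValidation: false
      ServiceAccountTokenPodNodeInfo: false
      TranslateStreamCloseWebsocketRequests: false
      ValidatingAdmissionPolicy: false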
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.269062 4183 server.go:982] "Starting client certificate rotation"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.269322 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.270038 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.305247 4183 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.348409 4183 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.354284 4183 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.355040 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.383335 4183 remote_runtime.go:143] "Validated CRI v1 runtime API"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.383439 4183 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.423604 4183 remote_image.go:111] "Validated CRI v1 image API"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.436425 4183 fs.go:132] Filesystem UUIDs: map[68d6f3e9-64e9-44a4-a1d0-311f9c629a01:/dev/vda4 6ea7ef63-bc43-49c4-9337-b3b14ffb2763:/dev/vda3 7B77-95E7:/dev/vda2]
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.436494 4183 fs.go:133] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/b56e232756d61ee2b06c4c940f94dd2d9c1c6744eb2ba718b704bda5002ffdcc/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/b56e232756d61ee2b06c4c940f94dd2d9c1c6744eb2ba718b704bda5002ffdcc/userdata/shm major:0 minor:43 fsType:tmpfs blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/40b1512db3f1e3b7db43a52c25ec16b90b1a271577cfa32a91a92a335a6d73c5/merged major:0 minor:44 fsType:overlay blockSize:0}]
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.453677 4183 manager.go:217] Machine: {Timestamp:2025-08-13 19:43:54.449606963 +0000 UTC m=+1.142271741 CPUVendorID:AuthenticAMD NumCores:6 NumPhysicalCores:1 NumSockets:6 CpuFrequency:2800000 MemoryCapacity:14635360256 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:c1bd596843fb445da20eca66471ddf66 SystemUUID:b5eaf2e9-3c86-474e-aca5-bab262204689 BootID:7bac8de7-aad0-4ed8-a9ad-c4391f6449b7 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:1463533568 Type:vfs Inodes:357308 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/b56e232756d61ee2b06c4c940f94dd2d9c1c6744eb2ba718b704bda5002ffdcc/userdata/shm DeviceMajor:0 DeviceMinor:43 Capacity:65536000 Type:vfs Inodes:1786543 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:85294297088 Type:vfs Inodes:41680368 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:7317680128 Type:vfs Inodes:1786543 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:2927075328 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85294297088 Type:vfs Inodes:41680368 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:7317680128 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:85899345920 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:52:fd:fc:07:21:82 Speed:0 Mtu:1500} {Name:br-int MacAddress:4e:ec:11:72:80:3b Speed:0 Mtu:1400} {Name:enp2s0 MacAddress:52:fd:fc:07:21:82 Speed:-1 Mtu:1500} {Name:eth10 MacAddress:c2:6f:cd:56:e0:cc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:b6:dc:d9:26:03:d4 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e6:a9:95:66:6b:74 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:14635360256 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:65536 Type:Data Level:1} {Id:0 Size:65536 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0} {Id:0 Threads:[1] Caches:[{Id:1 Size:65536 Type:Data Level:1} {Id:1 Size:65536 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1} {Id:0 Threads:[2] Caches:[{Id:2 Size:65536 Type:Data Level:1} {Id:2 Size:65536 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2} {Id:0 Threads:[3] Caches:[{Id:3 Size:65536 Type:Data Level:1} {Id:3 Size:65536 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3} {Id:0 Threads:[4] Caches:[{Id:4 Size:65536 Type:Data Level:1} {Id:4 Size:65536 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4} {Id:0 Threads:[5] Caches:[{Id:5 Size:65536 Type:Data Level:1} {Id:5 Size:65536 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.455115 4183 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
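The certificate_manager.go:562 error above, and the reflector and lease failures that follow, all reduce to one symptom: TCP connections to api-int.crc.testing:6443 (192.168.130.11) are refused because the apiserver is not listening yet. A minimal Go probe (an illustrative sketch, not part of the kubelet) that reproduces just that dial:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the failing requests in the log above.
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 3*time.Second)
	if err != nil {
		// While the apiserver is down this prints something like
		// "dial tcp 192.168.130.11:6443: connect: connection refused".
		fmt.Println("probe failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("TCP connect succeeded:", conn.RemoteAddr())
}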
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.455278 4183 manager.go:233] Version: {KernelVersion:5.14.0-427.22.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.464008 4183 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.465562 4183 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.465947 4183 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.465986 4183 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.466525 4183 manager.go:136] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.468951 4183 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.470533 4183 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.471372 4183 server.go:1227] "Using root directory" path="/var/lib/kubelet"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.474413 4183 kubelet.go:406] "Attempting to sync node with API server"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.474458 4183 kubelet.go:311] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.475131 4183 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.475372 4183 kubelet.go:322] "Adding apiserver pod source"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.476751 4183 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.481718 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.482235 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.482139 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.482302 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.485825 4183 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="cri-o" version="1.29.5-5.rhaos4.16.git7032128.el9" apiVersion="v1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.492543 4183 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
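The nodeConfig entry above carries everything needed to reproduce the kubelet's standard node-allocatable arithmetic: allocatable = capacity - kube-reserved - system-reserved - hard-eviction threshold. A back-of-the-envelope Go sketch using the MemoryCapacity reported in the Machine entry; the arithmetic is mine, not logged output:

package main

import "fmt"

const Mi = 1024 * 1024

func main() {
	capacity := int64(14635360256)    // MemoryCapacity from the Machine entry above
	systemReserved := int64(350 * Mi) // "SystemReserved":{"memory":"350Mi"} in the nodeConfig
	kubeReserved := int64(0)          // "KubeReserved":null in the nodeConfig
	evictionHard := int64(100 * Mi)   // memory.available hard threshold "100Mi"

	allocatable := capacity - systemReserved - kubeReserved - evictionHard
	fmt.Printf("allocatable memory: %d bytes (~%.2f GiB)\n",
		allocatable, float64(allocatable)/float64(1024*Mi))
}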
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.493577 4183 kubelet.go:826] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495264 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495561 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495608 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495724 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495888 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495980 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496094 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496285 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496379 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496398 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496535 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496614 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496656 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496880 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496980 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.497815 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.500830 4183 server.go:1262] "Started kubelet"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.502655 4183 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.502841 4183 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.500836 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc systemd[1]: Started Kubernetes Kubelet.
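Every component that talks to the apiserver in this window fails the same way and retries; the lease controller entry below even reports its retry interval ("200ms"). client-go ships real helpers for this (its wait and retry utilities), so the following is only a generic, stdlib-only sketch of the retry-with-backoff pattern, wrapped around the same dial these components are failing on:

package main

import (
	"fmt"
	"net"
	"time"
)

// retryWithBackoff is an illustrative helper, not kubelet or client-go source:
// it runs op up to attempts times, doubling the delay after each failure.
func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v (retrying in %s)\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("all %d attempts failed, last error: %w", attempts, err)
}

func main() {
	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
		conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	})
}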
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.506975 4183 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.517440 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.518906 4183 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.525606 4183 server.go:461] "Adding debug handlers to kubelet server"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.660549 4183 volume_manager.go:289] "The desired_state_of_world populator starts"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.660966 4183 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.670638 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="200ms"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.675547 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.675645 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.676413 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.676439 4183 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718166 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718472 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config" seLinuxMountContext="" Aug 13 19:43:54
crc kubenswrapper[4183]: I0813 19:43:54.718503 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718520 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718535 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="378552fd-5e53-4882-87ff-95f3d9198861" volumeName="kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718551 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718566 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718582 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718598 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718624 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718642 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718670 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718691 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config" seLinuxMountContext="" Aug 13 19:43:54 crc 
kubenswrapper[4183]: I0813 19:43:54.718713 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718729 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718756 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718823 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718855 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718875 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718988 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719013 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719030 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719048 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719074 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8" seLinuxMountContext="" Aug 13 
19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719094 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719113 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719138 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719156 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719243 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719274 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719293 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719332 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719360 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719377 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719410 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f40333-c860-4c04-8058-a0bf572dcf12" volumeName="kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp" 
seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719437 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="12e733dd-0939-4f1b-9cbb-13897e093787" volumeName="kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719456 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719472 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719488 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719513 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719531 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719545 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719561 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719607 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719624 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a23c0ee-5648-448c-b772-83dced2891ce" volumeName="kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719640 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls" 
seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719670 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719690 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719724 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719743 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719758 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719987 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="378552fd-5e53-4882-87ff-95f3d9198861" volumeName="kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720022 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720039 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720066 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6268b7fe-8910-4505-b404-6f1df638105c" volumeName="kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720083 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720101 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" 
volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720124 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720150 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720166 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720221 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720241 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720266 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720284 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720304 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720325 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720340 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720357 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720371 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720384 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720396 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720411 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720438 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720451 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720465 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720483 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.726965 4183 reconstruct_new.go:149] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727094 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn" 
seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727112 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727125 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727143 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727157 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727170 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727282 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727302 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727318 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727331 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727353 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727366 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" 
volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727379 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727509 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727526 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727582 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727599 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727618 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727635 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727648 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727667 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727680 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727693 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" 
volumeName="kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727706 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="378552fd-5e53-4882-87ff-95f3d9198861" volumeName="kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727723 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727741 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727754 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727767 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727839 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727855 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727878 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727890 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727902 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727924 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727936 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727948 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727960 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727977 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727993 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728005 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728016 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728033 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728049 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728062 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728074 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" 
volumeName="kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728086 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728502 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728516 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728528 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728546 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728562 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728575 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728596 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728609 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728620 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728631 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728643 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728654 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728665 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728681 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728697 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728708 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728729 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728742 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728754 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728766 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728871 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728892 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728904 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728921 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728935 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728950 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728962 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a48baf-1bee-4921-8bb2-9b7320e76f79" volumeName="kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728973 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728985 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728997 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729010 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729022 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729045 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729058 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729071 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729084 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729565 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729583 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729595 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729607 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729619 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729633 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729644 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729656 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729669 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729686 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729701 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729714 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729732 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729748 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729761 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729817 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729836 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729852 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729870 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729883 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729895 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729909 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729922 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729934 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729946 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729959 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730684 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730704 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730716 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730733 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730748 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5d722a-1123-4935-9740-52a08d018bc9" volumeName="kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730760 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730994 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731015 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731032 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731056 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731075 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731088 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731103 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731115 4183 reconstruct_new.go:135] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731133 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731150 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731163 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731241 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731260 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731276 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731296 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf1a8966-f594-490a-9fbb-eec5bafd13d3" volumeName="kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731398 4183 reconstruct_new.go:102] "Volume reconstruction finished" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731411 4183 reconciler_new.go:29] "Reconciler: start to sync state" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.760614 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.765043 4183 container_manager_linux.go:884] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775241 4183 factory.go:55] Registering systemd factory Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775368 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775678 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 
13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775770 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775873 4183 factory.go:221] Registration of the systemd container factory successfully Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.776145 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.779389 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.779986 4183 factory.go:153] Registering CRI-O factory Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.780147 4183 factory.go:221] Registration of the crio container factory successfully Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.780616 4183 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.780912 4183 factory.go:103] Registering Raw factory Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.781217 4183 manager.go:1196] Started watching for new ooms in manager Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.782546 4183 manager.go:319] Starting recovery of all containers Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.836554 4183 manager.go:324] Recovery completed Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.856954 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.858618 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.858719 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.858742 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.878047 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="400ms" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.980529 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024187 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024243 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024678 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024710 4183 kubelet_node_status.go:77] "Attempting to register node" 
node="crc" Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.026755 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.029064 4183 cpu_manager.go:215] "Starting CPU manager" policy="none" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.029249 4183 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.029599 4183 state_mem.go:36] "Initialized new in-memory state store" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.046027 4183 policy_none.go:49] "None policy: Start" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.048422 4183 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.048995 4183 state_mem.go:35] "Initializing new in-memory state store" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.152712 4183 manager.go:296] "Starting Device Plugin manager" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.153754 4183 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.154469 4183 server.go:79] "Starting device plugin registration server" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.159564 4183 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.160021 4183 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.160109 4183 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.203607 4183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.207046 4183 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.207448 4183 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.207823 4183 kubelet.go:2343] "Starting kubelet main sync loop" Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.208236 4183 kubelet.go:2367] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.221281 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.221355 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.280947 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="800ms" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.309413 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.310904 4183 topology_manager.go:215] "Topology Admit Handler" podUID="d3ae206906481b4831fd849b559269c8" podNamespace="openshift-machine-config-operator" podName="kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.312723 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.317346 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.317408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.317428 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.319511 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b2a6a3b2ca08062d24afa4c01aaf9e4f" podNamespace="openshift-etcd" podName="etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.319642 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.323652 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.324535 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329208 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329259 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329281 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329319 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329356 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329377 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330172 4183 topology_manager.go:215] "Topology Admit Handler" podUID="53c1db1508241fbac1bedf9130341ffe" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330667 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332452 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332511 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332629 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332661 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.333185 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.333258 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.334389 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.334431 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.334444 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335632 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335680 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335705 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335771 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335860 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.336747 4183 topology_manager.go:215] "Topology Admit Handler" podUID="631cdb37fbb54e809ecc5e719aebd371" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.336855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.336897 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.337520 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340045 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340131 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340203 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340406 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340446 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.402370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.402442 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.402456 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.404278 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.405101 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.405176 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.405191 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.427930 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429816 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429869 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429912 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.431407 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.458478 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.458898 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.458984 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459010 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459030 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459062 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459083 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459104 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459122 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459251 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459318 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459384 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459415 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459465 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459494 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.506240 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.537648 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.537744 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561519 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561715 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561850 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561916 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561955 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562001 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562030 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562414 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562520 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562569 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562536 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562757 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562826 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562873 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") 
" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562900 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562923 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562945 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562969 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562977 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562990 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.563241 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.664890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.688244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.699689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.729881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.738024 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.755628 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.755711 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.771301 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53c1db1508241fbac1bedf9130341ffe.slice/crio-e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83 WatchSource:0}: Error finding container e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83: Status 404 returned error can't find the container with id e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83 Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.775105 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3ae206906481b4831fd849b559269c8.slice/crio-410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b WatchSource:0}: Error finding container 410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b: Status 404 returned error can't find the container with id 410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.776442 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a6a3b2ca08062d24afa4c01aaf9e4f.slice/crio-b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc WatchSource:0}: Error finding container b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc: Status 404 returned error can't find the container with id b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.799304 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.799427 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.800647 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eb2b200bca0d10cf0fe16fb7c0caf80.slice/crio-f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29 WatchSource:0}: Error finding container f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29: Status 404 returned error can't find the container with id 
f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29 Aug 13 19:43:56 crc kubenswrapper[4183]: W0813 19:43:56.069422 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.069914 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.082587 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="1.6s" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.227474 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83"} Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.229358 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc"} Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.230869 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b"} Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.232052 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234146 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234221 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234239 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234266 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.235577 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29"} Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.235746 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.237420 4183 kubelet.go:2461] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9"} Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.451076 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.455457 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.508515 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: W0813 19:43:57.317931 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.318144 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.509595 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: W0813 19:43:57.628935 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.629006 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.685165 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="3.2s" Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.836113 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839094 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839177 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839196 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839229 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.840852 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.249354 4183 generic.go:334] "Generic (PLEG): container finished" podID="d3ae206906481b4831fd849b559269c8" containerID="e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b" exitCode=0 Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.249430 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerDied","Data":"e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.249608 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.251184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.251225 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.251241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.266930 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.266977 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.269747 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" exitCode=0 Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.269973 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerDied","Data":"d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.270197 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.271762 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.271931 4183 kubelet_node_status.go:729] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.272147 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.276167 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480" exitCode=0 Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.276318 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.276473 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.287206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.287241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.287260 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.291941 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.293208 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.293247 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.293259 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.294336 4183 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6" exitCode=0 Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.294394 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.294503 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.313351 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.313410 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.313425 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.505669 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:58 crc kubenswrapper[4183]: W0813 19:43:58.854605 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:58 crc kubenswrapper[4183]: E0813 19:43:58.855205 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:58 crc kubenswrapper[4183]: W0813 19:43:58.867610 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:58 crc kubenswrapper[4183]: E0813 19:43:58.867659 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:59 crc kubenswrapper[4183]: I0813 19:43:59.324418 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} Aug 13 19:43:59 crc kubenswrapper[4183]: I0813 19:43:59.507149 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.410433 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5"} Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.466757 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9"} Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.467072 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.471089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.471277 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.471297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.486883 4183 
kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.487041 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.492887 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.492975 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.492989 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.505078 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52"} Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.505299 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.577033 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:00 crc kubenswrapper[4183]: E0813 19:44:00.590270 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.720716 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:44:00 crc kubenswrapper[4183]: E0813 19:44:00.723203 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:00 crc kubenswrapper[4183]: E0813 19:44:00.887637 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="6.4s" Aug 13 19:44:01 crc 
kubenswrapper[4183]: I0813 19:44:01.041735 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044357 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044501 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044544 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:44:01 crc kubenswrapper[4183]: E0813 19:44:01.046129 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.510569 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.520531 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2"} Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.545127 4183 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0" exitCode=0 Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.545242 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0"} Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.545204 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.547675 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.547827 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.547851 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.558076 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.564287 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.564398 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.565986 4183 kubelet_node_status.go:729] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566213 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566227 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566240 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566256 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:01 crc kubenswrapper[4183]: W0813 19:44:01.898722 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:01 crc kubenswrapper[4183]: E0813 19:44:01.898960 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.510177 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.588563 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff"} Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.588662 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.601242 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.601332 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.601355 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:02 crc kubenswrapper[4183]: W0813 19:44:02.882299 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:02 crc kubenswrapper[4183]: E0813 19:44:02.882601 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused 
Aug 13 19:44:03 crc kubenswrapper[4183]: W0813 19:44:03.445602 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:03 crc kubenswrapper[4183]: E0813 19:44:03.445714 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.617916 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325"} Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.636725 4183 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73" exitCode=0 Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.637116 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73"} Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.637226 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.641321 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.641454 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.641475 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.643619 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"} Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.643721 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.645099 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.645124 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.645135 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:05 crc kubenswrapper[4183]: E0813 19:44:05.404914 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.651064 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd"} Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.660344 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a"} Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.660370 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.660455 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.661600 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.661675 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.661856 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.699489 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff"} Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.700288 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.701949 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.702080 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.702100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.709009 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.709489 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c"} Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.710124 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.710206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.710226 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.447444 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 
19:44:07.449366 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.449427 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.449443 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.449484 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.563401 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.705518 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.705957 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.709252 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.709310 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.709334 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.726474 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15"} Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.726614 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.728519 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.729063 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.729094 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.746001 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.743550 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.743552 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44"} Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.743630 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.744334 4183 
kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.746270 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.746333 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.746349 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747251 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747304 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747321 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747831 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747853 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.750507 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.905078 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.008274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.358473 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.581161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.746214 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.746245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.746313 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748257 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748316 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748336 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748365 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748395 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748407 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748257 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748464 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.748543 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.748652 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.749968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750022 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750232 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750040 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750296 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.169892 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.170071 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.171882 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.171927 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.171944 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:12 crc kubenswrapper[4183]: I0813 19:44:12.581168 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:44:12 crc kubenswrapper[4183]: I0813 19:44:12.582219 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:44:13 crc kubenswrapper[4183]: W0813 19:44:13.494495 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.495229 4183 trace.go:236] Trace[777984701]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:44:03.491) (total time: 10003ms): Aug 13 19:44:13 crc kubenswrapper[4183]: Trace[777984701]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (19:44:13.494) Aug 13 19:44:13 crc kubenswrapper[4183]: Trace[777984701]: [10.003254671s] [10.003254671s] END Aug 13 19:44:13 crc kubenswrapper[4183]: E0813 19:44:13.495274 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.510042 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": net/http: TLS handshake timeout Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.524599 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.524771 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.526566 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.526733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.526958 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:15 crc kubenswrapper[4183]: E0813 19:44:15.406986 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:44:17 crc kubenswrapper[4183]: E0813 19:44:17.290252 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Aug 13 19:44:17 crc 
kubenswrapper[4183]: E0813 19:44:17.452281 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Aug 13 19:44:18 crc kubenswrapper[4183]: E0813 19:44:18.909575 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": net/http: TLS handshake timeout Aug 13 19:44:20 crc kubenswrapper[4183]: E0813 19:44:20.593140 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:44:21 crc kubenswrapper[4183]: I0813 19:44:21.170909 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/healthz\": context deadline exceeded" start-of-body= Aug 13 19:44:21 crc kubenswrapper[4183]: I0813 19:44:21.171045 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/healthz\": context deadline exceeded" Aug 13 19:44:22 crc kubenswrapper[4183]: W0813 19:44:22.208232 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.208402 4183 trace.go:236] Trace[505837227]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:44:12.205) (total time: 10002ms): Aug 13 19:44:22 crc kubenswrapper[4183]: Trace[505837227]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (19:44:22.208) Aug 13 19:44:22 crc kubenswrapper[4183]: Trace[505837227]: [10.002428675s] [10.002428675s] END Aug 13 19:44:22 crc kubenswrapper[4183]: E0813 19:44:22.208424 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.427506 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer" start-of-body= Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.427635 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer" Aug 13 19:44:22 crc kubenswrapper[4183]: W0813 19:44:22.443211 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: E0813 19:44:22.443301 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.492631 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: W0813 19:44:22.495898 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: E0813 19:44:22.496042 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.530058 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.535586 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Aug 13 19:44:22 crc kubenswrapper[4183]: 
I0813 19:44:22.535739 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.581414 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.581995 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.882447 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.885166 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff" exitCode=255 Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.885352 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff"} Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.885557 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.887150 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.887276 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.887352 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.888737 4183 scope.go:117] "RemoveContainer" containerID="9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.573335 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:23Z is after 2025-06-26T12:47:18Z Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.771285 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.772341 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:23 crc 
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.774445 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.774544 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.811249 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.894466 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.903096 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.905032 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.905088 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.905110 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:24 crc kubenswrapper[4183]: E0813 19:44:24.295813 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:24Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.453246 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.455919 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.456074 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.456100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.456132 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:24 crc kubenswrapper[4183]: E0813 19:44:24.472356 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:24Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.508688 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:24Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.891416 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.908121 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.910526 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"}
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.910718 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.911904 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.911957 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.911975 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:25 crc kubenswrapper[4183]: E0813 19:44:25.408285 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.512733 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:25Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.913000 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.913136 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.916084 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.916152 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.916168 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.185479 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:26 crc kubenswrapper[4183]: W0813 19:44:26.220924 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:26 crc kubenswrapper[4183]: E0813 19:44:26.221145 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.508892 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.921346 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.923508 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.928912 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8" exitCode=255
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.928964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"}
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.929010 4183 scope.go:117] "RemoveContainer" containerID="9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.929285 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.932302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.933985 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.934318 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.940734 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"
Aug 13 19:44:26 crc kubenswrapper[4183]: E0813 19:44:26.943129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.953158 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.509157 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:27Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.933897 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.939891 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.941421 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.941681 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.941908 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.943245 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"
Aug 13 19:44:27 crc kubenswrapper[4183]: E0813 19:44:27.943855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.507271 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.945603 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.947340 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.947415 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.947437 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.949265 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"
Aug 13 19:44:28 crc kubenswrapper[4183]: E0813 19:44:28.949934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:44:29 crc kubenswrapper[4183]: I0813 19:44:29.510225 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:29Z is after 2025-06-26T12:47:18Z Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.179631 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer" start-of-body= Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.179912 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.180009 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.180293 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.184525 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.184711 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.184746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.189862 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.190889 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c" gracePeriod=30 Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.508175 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:30Z is after 2025-06-26T12:47:18Z Aug 13 19:44:30 crc kubenswrapper[4183]: E0813 19:44:30.598587 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:30Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.957497 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/0.log" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958419 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c" exitCode=255 Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958502 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c"} Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958532 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9"} Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958833 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.960009 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.960062 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.960085 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:31 crc kubenswrapper[4183]: E0813 19:44:31.300057 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:31Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.474098 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.475689 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.475940 4183 kubelet_node_status.go:729] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.475967 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.476003 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:44:31 crc kubenswrapper[4183]: E0813 19:44:31.479716 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:31Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.508445 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:31Z is after 2025-06-26T12:47:18Z Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.559283 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.962125 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.963607 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.963676 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.963699 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:32 crc kubenswrapper[4183]: I0813 19:44:32.508713 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:32Z is after 2025-06-26T12:47:18Z Aug 13 19:44:33 crc kubenswrapper[4183]: I0813 19:44:33.507968 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:33Z is after 2025-06-26T12:47:18Z Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.509459 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:34Z is after 2025-06-26T12:47:18Z Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.891356 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.891730 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.893298 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.893389 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.893407 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.894609 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8" Aug 13 19:44:34 crc kubenswrapper[4183]: E0813 19:44:34.895045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.956972 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:44:34 crc kubenswrapper[4183]: E0813 19:44:34.965734 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:34Z is after 2025-06-26T12:47:18Z Aug 13 19:44:34 crc kubenswrapper[4183]: E0813 19:44:34.965983 4183 certificate_manager.go:440] kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition Aug 13 19:44:35 crc kubenswrapper[4183]: E0813 19:44:35.409388 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:44:35 crc kubenswrapper[4183]: I0813 19:44:35.507686 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:35Z is after 2025-06-26T12:47:18Z Aug 13 19:44:36 crc kubenswrapper[4183]: I0813 19:44:36.509197 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:36Z is after 2025-06-26T12:47:18Z Aug 13 19:44:36 crc kubenswrapper[4183]: W0813 19:44:36.583957 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:36Z is after 2025-06-26T12:47:18Z Aug 13 19:44:36 crc kubenswrapper[4183]: E0813 19:44:36.584065 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list 
Aug 13 19:44:37 crc kubenswrapper[4183]: I0813 19:44:37.507683 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:37Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:38 crc kubenswrapper[4183]: E0813 19:44:38.304970 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:38Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.480243 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482006 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482036 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482051 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482077 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:38 crc kubenswrapper[4183]: E0813 19:44:38.486195 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:38Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.507744 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:38Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.508194 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:39Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.580897 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.581127 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.582389 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.582456 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.582473 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:40 crc kubenswrapper[4183]: I0813 19:44:40.507720 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:40Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:40 crc kubenswrapper[4183]: E0813 19:44:40.603676 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:40Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:44:41 crc kubenswrapper[4183]: I0813 19:44:41.507445 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:42 crc kubenswrapper[4183]: I0813 19:44:42.507559 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:42 crc kubenswrapper[4183]: W0813 19:44:42.522365 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:42 crc kubenswrapper[4183]: E0813 19:44:42.522440 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:42 crc kubenswrapper[4183]: I0813 19:44:42.581872 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:44:42 crc kubenswrapper[4183]: I0813 19:44:42.582387 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:44:43 crc kubenswrapper[4183]: I0813 19:44:43.508421 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:43Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:44 crc kubenswrapper[4183]: I0813 19:44:44.507425 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:44Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:45 crc kubenswrapper[4183]: W0813 19:44:45.280999 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.281599 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.309494 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.410132 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.486592 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.489724 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.490565 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.490649 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.490692 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.496415 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.508552 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.352404 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.353013 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.354512 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.354573 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.354587 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.507711 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:47 crc kubenswrapper[4183]: W0813 19:44:47.185997 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:47 crc kubenswrapper[4183]: E0813 19:44:47.186303 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:47 crc kubenswrapper[4183]: I0813 19:44:47.508005 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:48 crc kubenswrapper[4183]: I0813 19:44:48.530896 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:48Z is after 2025-06-26T12:47:18Z
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:48Z is after 2025-06-26T12:47:18Z Aug 13 19:44:49 crc kubenswrapper[4183]: I0813 19:44:49.508142 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:49Z is after 2025-06-26T12:47:18Z Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.208245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.209677 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.209728 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.209743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.211129 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8" Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.508572 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:50Z is after 2025-06-26T12:47:18Z Aug 13 19:44:50 crc kubenswrapper[4183]: E0813 19:44:50.611066 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:50Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.030401 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log" Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.045562 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"} Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.046059 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:51 crc 
kubenswrapper[4183]: I0813 19:44:51.048093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.048183 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.048203 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.510559 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:51Z is after 2025-06-26T12:47:18Z Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.054591 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.055848 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064063 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98" exitCode=255 Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064165 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"} Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064305 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064881 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.067302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.067486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.067529 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.070693 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98" Aug 13 19:44:52 crc kubenswrapper[4183]: E0813 19:44:52.072699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:44:52 crc kubenswrapper[4183]: E0813 19:44:52.319223 4183 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:52Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.496694 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.498405 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.498720 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.498978 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.499107 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:44:52 crc kubenswrapper[4183]: E0813 19:44:52.504188 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:52Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.507577 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:52Z is after 2025-06-26T12:47:18Z Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.581562 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.581752 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:44:53 crc kubenswrapper[4183]: I0813 19:44:53.070983 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log" Aug 13 19:44:53 crc kubenswrapper[4183]: I0813 19:44:53.508312 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:53Z is after 2025-06-26T12:47:18Z Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.508279 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:54Z is after 2025-06-26T12:47:18Z Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657538 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657691 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657720 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657741 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657755 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.891466 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.892106 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.893700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.894037 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.894089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.895662 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98" Aug 13 19:44:54 crc kubenswrapper[4183]: E0813 19:44:54.896216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:44:55 crc kubenswrapper[4183]: E0813 19:44:55.410662 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:44:55 crc kubenswrapper[4183]: I0813 19:44:55.507525 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:55Z is after 2025-06-26T12:47:18Z Aug 13 19:44:56 crc kubenswrapper[4183]: I0813 19:44:56.508760 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-08-13T19:44:56Z is after 2025-06-26T12:47:18Z Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.507157 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:57Z is after 2025-06-26T12:47:18Z Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.563091 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.563345 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.565501 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.565852 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.566000 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.571517 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98" Aug 13 19:44:57 crc kubenswrapper[4183]: E0813 19:44:57.572262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:44:58 crc kubenswrapper[4183]: I0813 19:44:58.507190 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:58Z is after 2025-06-26T12:47:18Z Aug 13 19:44:59 crc kubenswrapper[4183]: E0813 19:44:59.326432 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:59Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.504460 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506489 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506660 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506694 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506737 4183 kubelet_node_status.go:77] "Attempting to register 
node" node="crc" Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.509406 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:59Z is after 2025-06-26T12:47:18Z Aug 13 19:44:59 crc kubenswrapper[4183]: E0813 19:44:59.512950 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:59Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.507961 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:00Z is after 2025-06-26T12:47:18Z Aug 13 19:45:00 crc kubenswrapper[4183]: E0813 19:45:00.615941 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:00Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995163 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:59688->192.168.126.11:10357: read: connection reset by peer" start-of-body= Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995291 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:59688->192.168.126.11:10357: read: connection reset by peer" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995354 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995730 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.997332 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.997373 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.997385 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.002082 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.003082 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9" gracePeriod=30 Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.100706 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/1.log" Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.102983 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/0.log" Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.106342 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9" exitCode=255 Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.106406 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9"} Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.106447 4183 scope.go:117] "RemoveContainer" containerID="7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c" Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.508464 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:01Z is after 2025-06-26T12:47:18Z Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.111742 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/1.log" Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.113541 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6"} Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.113650 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:02 crc 
kubenswrapper[4183]: I0813 19:45:02.114682 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.114738 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.114754 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.509447 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:02Z is after 2025-06-26T12:47:18Z Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.116281 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.117326 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.117378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.117394 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.508066 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:03Z is after 2025-06-26T12:47:18Z Aug 13 19:45:04 crc kubenswrapper[4183]: I0813 19:45:04.509005 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:04Z is after 2025-06-26T12:47:18Z Aug 13 19:45:05 crc kubenswrapper[4183]: E0813 19:45:05.410927 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:45:05 crc kubenswrapper[4183]: I0813 19:45:05.509997 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:05Z is after 2025-06-26T12:47:18Z Aug 13 19:45:06 crc kubenswrapper[4183]: E0813 19:45:06.332956 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.507894 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.514149 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516383 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516400 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516437 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:06 crc kubenswrapper[4183]: E0813 19:45:06.520556 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.969439 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:45:06 crc kubenswrapper[4183]: E0813 19:45:06.974382 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z Aug 13 19:45:07 crc kubenswrapper[4183]: I0813 19:45:07.507969 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:07Z is after 2025-06-26T12:47:18Z Aug 13 19:45:08 crc kubenswrapper[4183]: I0813 19:45:08.508286 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:08Z is after 2025-06-26T12:47:18Z Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.507931 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:09Z is after 2025-06-26T12:47:18Z Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.581036 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.581296 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.582869 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.582950 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.582974 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:10 crc kubenswrapper[4183]: I0813 19:45:10.508251 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:10Z is after 2025-06-26T12:47:18Z Aug 13 19:45:10 crc kubenswrapper[4183]: E0813 19:45:10.621077 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:10Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.507141 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:11Z is after 2025-06-26T12:47:18Z Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.558506 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.558664 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.560311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.560465 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.560495 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.209239 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.211048 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.211092 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.211104 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.212843 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98" Aug 13 19:45:12 crc kubenswrapper[4183]: W0813 19:45:12.375543 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:12Z is after 2025-06-26T12:47:18Z Aug 13 19:45:12 crc kubenswrapper[4183]: E0813 19:45:12.375667 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:12Z is after 2025-06-26T12:47:18Z Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.508906 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:12Z is after 2025-06-26T12:47:18Z Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.582036 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.582203 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.152957 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.156207 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"} Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.156392 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.157541 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.157717 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.157924 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:13 crc kubenswrapper[4183]: E0813 19:45:13.337071 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:13Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.508426 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:13Z is after 2025-06-26T12:47:18Z Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.520646 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522157 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522456 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522529 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522603 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:13 crc kubenswrapper[4183]: E0813 19:45:13.528513 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:13Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.161681 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.162518 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.166966 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" exitCode=255 Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.167054 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"} Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.167107 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.167229 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume 
controller attach/detach" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.168632 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.168746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.168849 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.170929 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 19:45:14 crc kubenswrapper[4183]: E0813 19:45:14.171697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.208869 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.210386 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.210540 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.210648 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.507841 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:14Z is after 2025-06-26T12:47:18Z Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.891288 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.171833 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.174120 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.175018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.175060 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.175073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.176106 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 
19:45:15 crc kubenswrapper[4183]: E0813 19:45:15.176437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:15 crc kubenswrapper[4183]: E0813 19:45:15.411865 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.507316 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:15Z is after 2025-06-26T12:47:18Z Aug 13 19:45:16 crc kubenswrapper[4183]: I0813 19:45:16.509268 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:16Z is after 2025-06-26T12:47:18Z Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.509667 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:17Z is after 2025-06-26T12:47:18Z Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.563182 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.563484 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.565073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.565125 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.565145 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.566391 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 19:45:17 crc kubenswrapper[4183]: E0813 19:45:17.566892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:18 crc kubenswrapper[4183]: I0813 19:45:18.508241 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:18Z is after 2025-06-26T12:47:18Z Aug 13 19:45:19 crc kubenswrapper[4183]: I0813 19:45:19.511330 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:19Z is after 2025-06-26T12:47:18Z Aug 13 19:45:20 crc kubenswrapper[4183]: E0813 19:45:20.341923 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.508349 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.528918 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530400 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530625 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:20 crc kubenswrapper[4183]: E0813 19:45:20.534200 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:20 crc kubenswrapper[4183]: E0813 19:45:20.627698 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:21 crc kubenswrapper[4183]: I0813 19:45:21.508311 4183 csi_plugin.go:880] Failed to contact API 
server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:21Z is after 2025-06-26T12:47:18Z Aug 13 19:45:22 crc kubenswrapper[4183]: W0813 19:45:22.431240 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:22Z is after 2025-06-26T12:47:18Z Aug 13 19:45:22 crc kubenswrapper[4183]: E0813 19:45:22.431305 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:22Z is after 2025-06-26T12:47:18Z Aug 13 19:45:22 crc kubenswrapper[4183]: I0813 19:45:22.507124 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:22Z is after 2025-06-26T12:47:18Z Aug 13 19:45:22 crc kubenswrapper[4183]: I0813 19:45:22.580405 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:45:22 crc kubenswrapper[4183]: I0813 19:45:22.580763 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:45:23 crc kubenswrapper[4183]: I0813 19:45:23.507832 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:23Z is after 2025-06-26T12:47:18Z Aug 13 19:45:24 crc kubenswrapper[4183]: I0813 19:45:24.509082 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:24Z is after 2025-06-26T12:47:18Z Aug 13 19:45:25 crc kubenswrapper[4183]: E0813 19:45:25.412585 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:45:25 crc kubenswrapper[4183]: I0813 19:45:25.508881 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:25Z is after 2025-06-26T12:47:18Z Aug 13 19:45:26 crc kubenswrapper[4183]: I0813 19:45:26.507470 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:26Z is after 2025-06-26T12:47:18Z Aug 13 19:45:27 crc kubenswrapper[4183]: E0813 19:45:27.346884 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:27Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.510549 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:27Z is after 2025-06-26T12:47:18Z Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.534700 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.540097 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.540188 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.540208 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.540270 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:27 crc kubenswrapper[4183]: E0813 19:45:27.544948 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:27Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:28 crc kubenswrapper[4183]: I0813 19:45:28.507944 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:28Z is after 2025-06-26T12:47:18Z Aug 13 19:45:29 crc kubenswrapper[4183]: W0813 19:45:29.332190 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:29Z is after 2025-06-26T12:47:18Z Aug 13 19:45:29 crc kubenswrapper[4183]: E0813 19:45:29.332305 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:29Z is after 2025-06-26T12:47:18Z Aug 13 19:45:29 crc kubenswrapper[4183]: I0813 19:45:29.508640 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:29Z is after 2025-06-26T12:47:18Z Aug 13 19:45:30 crc kubenswrapper[4183]: I0813 19:45:30.507496 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:30Z is after 2025-06-26T12:47:18Z Aug 13 19:45:30 crc kubenswrapper[4183]: E0813 19:45:30.632844 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:30Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.209282 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.211543 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.211643 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.211664 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.214026 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 19:45:31 crc kubenswrapper[4183]: E0813 19:45:31.215310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.508192 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:31Z is after 2025-06-26T12:47:18Z Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769405 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:50512->192.168.126.11:10357: read: connection reset by peer" start-of-body= Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769522 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:50512->192.168.126.11:10357: read: connection reset by peer" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769608 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769813 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.771861 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.771993 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.772154 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.774314 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.774876 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6" gracePeriod=30 Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.248265 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/2.log" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.248965 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/1.log" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250470 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6" exitCode=255 Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250514 4183 kubelet.go:2461] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6"} Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250595 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f"} Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250676 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250666 4183 scope.go:117] "RemoveContainer" containerID="0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.251767 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.251898 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.251922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.507279 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:32Z is after 2025-06-26T12:47:18Z Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.259638 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/2.log" Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.262592 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.264018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.264120 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.264143 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.508014 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:33Z is after 2025-06-26T12:47:18Z Aug 13 19:45:33 crc kubenswrapper[4183]: W0813 19:45:33.705946 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:33Z is after 2025-06-26T12:47:18Z Aug 13 19:45:33 
crc kubenswrapper[4183]: E0813 19:45:33.706061 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:33Z is after 2025-06-26T12:47:18Z Aug 13 19:45:34 crc kubenswrapper[4183]: E0813 19:45:34.352501 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:34Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.508937 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:34Z is after 2025-06-26T12:47:18Z Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.545880 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.548101 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.548169 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.548187 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.548219 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:34 crc kubenswrapper[4183]: E0813 19:45:34.552614 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:34Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:35 crc kubenswrapper[4183]: E0813 19:45:35.413709 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:45:35 crc kubenswrapper[4183]: I0813 19:45:35.507972 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:35Z is after 2025-06-26T12:47:18Z Aug 13 19:45:36 crc kubenswrapper[4183]: I0813 19:45:36.507944 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:36Z is after 2025-06-26T12:47:18Z Aug 13 19:45:37 crc kubenswrapper[4183]: I0813 19:45:37.508249 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:37Z is after 2025-06-26T12:47:18Z Aug 13 19:45:38 crc kubenswrapper[4183]: I0813 19:45:38.508206 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:38Z is after 2025-06-26T12:47:18Z Aug 13 19:45:38 crc kubenswrapper[4183]: I0813 19:45:38.969995 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:45:38 crc kubenswrapper[4183]: E0813 19:45:38.976170 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:38Z is after 2025-06-26T12:47:18Z Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.508669 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:39Z is after 2025-06-26T12:47:18Z Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.581199 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.581513 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.585195 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.585255 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.585274 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:40 crc kubenswrapper[4183]: I0813 19:45:40.507390 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:40Z is after 2025-06-26T12:47:18Z Aug 13 19:45:40 crc kubenswrapper[4183]: E0813 19:45:40.639384 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:40Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:41 crc kubenswrapper[4183]: E0813 19:45:41.357739 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:41Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.508453 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:41Z is after 2025-06-26T12:47:18Z Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.554204 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.556627 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.556974 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.557203 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.557428 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.558194 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.558607 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.559625 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.559680 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.559694 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:41 crc kubenswrapper[4183]: E0813 19:45:41.562659 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:41Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:42 crc kubenswrapper[4183]: I0813 19:45:42.508395 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:42Z is 
after 2025-06-26T12:47:18Z Aug 13 19:45:42 crc kubenswrapper[4183]: I0813 19:45:42.582078 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" start-of-body= Aug 13 19:45:42 crc kubenswrapper[4183]: I0813 19:45:42.582490 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.208292 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.209891 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.209995 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.210016 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.211226 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 19:45:43 crc kubenswrapper[4183]: E0813 19:45:43.211633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.510590 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:43Z is after 2025-06-26T12:47:18Z Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.209354 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.213562 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.213650 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.213670 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.508431 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:44Z is after 2025-06-26T12:47:18Z Aug 13 19:45:45 crc kubenswrapper[4183]: E0813 
19:45:45.414942 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:45:45 crc kubenswrapper[4183]: I0813 19:45:45.508706 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:45Z is after 2025-06-26T12:47:18Z Aug 13 19:45:46 crc kubenswrapper[4183]: I0813 19:45:46.507259 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:46Z is after 2025-06-26T12:47:18Z Aug 13 19:45:47 crc kubenswrapper[4183]: I0813 19:45:47.509695 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:47Z is after 2025-06-26T12:47:18Z Aug 13 19:45:48 crc kubenswrapper[4183]: E0813 19:45:48.363856 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:48Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.508271 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:48Z is after 2025-06-26T12:47:18Z Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.564016 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567428 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567522 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567574 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567632 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:48 crc kubenswrapper[4183]: E0813 19:45:48.572082 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:48Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.208719 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.210354 4183 kubelet_node_status.go:729] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.210508 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.210740 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.508264 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:49Z is after 2025-06-26T12:47:18Z Aug 13 19:45:50 crc kubenswrapper[4183]: I0813 19:45:50.508065 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:50Z is after 2025-06-26T12:47:18Z Aug 13 19:45:50 crc kubenswrapper[4183]: E0813 19:45:50.643361 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:50Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:50 crc kubenswrapper[4183]: E0813 19:45:50.643457 4183 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:50 crc kubenswrapper[4183]: E0813 19:45:50.647449 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:50Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:51 crc kubenswrapper[4183]: I0813 19:45:51.509519 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:51Z is after 2025-06-26T12:47:18Z Aug 13 19:45:51 crc kubenswrapper[4183]: E0813 19:45:51.794485 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:51Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:52 crc kubenswrapper[4183]: I0813 19:45:52.509904 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:52Z is after 2025-06-26T12:47:18Z Aug 13 19:45:52 crc kubenswrapper[4183]: I0813 19:45:52.581821 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:45:52 crc kubenswrapper[4183]: I0813 19:45:52.582173 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:45:53 crc kubenswrapper[4183]: I0813 19:45:53.509729 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:53Z is after 2025-06-26T12:47:18Z Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.508647 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:54Z is after 2025-06-26T12:47:18Z Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659167 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659309 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659344 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659370 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659420 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:45:55 crc kubenswrapper[4183]: E0813 19:45:55.368548 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:55Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:55 crc kubenswrapper[4183]: E0813 19:45:55.416050 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.507485 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:55Z is after 2025-06-26T12:47:18Z Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.574137 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576618 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576670 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576708 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:55 crc kubenswrapper[4183]: E0813 19:45:55.580415 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:55Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.209118 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.210732 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.210840 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.210859 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.212460 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.510678 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:56Z is after 2025-06-26T12:47:18Z Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.354644 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log" Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.357558 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"} Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.357718 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.358960 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.359026 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.359043 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.508048 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:57Z is after 2025-06-26T12:47:18Z Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.563936 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.363040 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.364913 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367439 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" exitCode=255 Aug 
13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367539 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"} Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367572 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367603 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.369304 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.369404 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.369630 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.371325 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:45:58 crc kubenswrapper[4183]: E0813 19:45:58.371984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.508439 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:58Z is after 2025-06-26T12:47:18Z Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.376302 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.384107 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.386032 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.386120 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.386155 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.388711 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:45:59 crc kubenswrapper[4183]: E0813 19:45:59.389651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.507063 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:59Z is after 2025-06-26T12:47:18Z Aug 13 19:46:00 crc kubenswrapper[4183]: I0813 19:46:00.517885 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:00Z is after 2025-06-26T12:47:18Z Aug 13 19:46:01 crc kubenswrapper[4183]: W0813 19:46:01.348988 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z Aug 13 19:46:01 crc kubenswrapper[4183]: E0813 19:46:01.349134 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z Aug 13 19:46:01 crc kubenswrapper[4183]: I0813 19:46:01.507847 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z Aug 13 19:46:01 crc kubenswrapper[4183]: E0813 19:46:01.804456 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:02 crc kubenswrapper[4183]: E0813 19:46:02.375954 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:46:02Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.511228 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:02Z is after 2025-06-26T12:47:18Z Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.571763 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:36156->192.168.126.11:10357: read: connection reset by peer" start-of-body= Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.571983 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:36156->192.168.126.11:10357: read: connection reset by peer" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.572064 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.572264 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.574337 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.574366 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.574378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.576042 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.576385 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f" gracePeriod=30 Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.581620 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584487 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584708 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 
19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584834 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:02 crc kubenswrapper[4183]: E0813 19:46:02.595868 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:02Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.399607 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/3.log" Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.400721 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/2.log" Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.402969 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f" exitCode=255 Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403024 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f"} Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403062 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"} Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403091 4183 scope.go:117] "RemoveContainer" containerID="dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6" Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.404463 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.404582 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.404599 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.507733 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:03Z is after 2025-06-26T12:47:18Z Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.413221 4183 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/3.log" Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.509144 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:04Z is after 2025-06-26T12:47:18Z Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.892034 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.892472 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.894998 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.895184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.895294 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.896912 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:46:04 crc kubenswrapper[4183]: E0813 19:46:04.897399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:46:05 crc kubenswrapper[4183]: E0813 19:46:05.416222 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:46:05 crc kubenswrapper[4183]: W0813 19:46:05.449941 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:05Z is after 2025-06-26T12:47:18Z Aug 13 19:46:05 crc kubenswrapper[4183]: E0813 19:46:05.450097 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:05Z is after 2025-06-26T12:47:18Z Aug 13 19:46:05 crc kubenswrapper[4183]: I0813 19:46:05.508913 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:05Z is after 2025-06-26T12:47:18Z Aug 13 19:46:06 crc kubenswrapper[4183]: I0813 19:46:06.510697 4183 csi_plugin.go:880] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:06Z is after 2025-06-26T12:47:18Z Aug 13 19:46:07 crc kubenswrapper[4183]: I0813 19:46:07.508141 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:07Z is after 2025-06-26T12:47:18Z Aug 13 19:46:08 crc kubenswrapper[4183]: I0813 19:46:08.509106 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:08Z is after 2025-06-26T12:47:18Z Aug 13 19:46:09 crc kubenswrapper[4183]: E0813 19:46:09.380169 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:09Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.508176 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:09Z is after 2025-06-26T12:47:18Z Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.580950 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.581183 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.584743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.585010 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.585109 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.596742 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598652 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598702 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598718 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598745 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 
19:46:09 crc kubenswrapper[4183]: E0813 19:46:09.605621 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:09Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:10 crc kubenswrapper[4183]: I0813 19:46:10.509770 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:10Z is after 2025-06-26T12:47:18Z Aug 13 19:46:10 crc kubenswrapper[4183]: I0813 19:46:10.969747 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:46:10 crc kubenswrapper[4183]: E0813 19:46:10.975379 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:10Z is after 2025-06-26T12:47:18Z Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.511689 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:11Z is after 2025-06-26T12:47:18Z Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.559714 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.561022 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.564169 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.564287 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.564307 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:11 crc kubenswrapper[4183]: E0813 19:46:11.816090 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:11Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:12 crc kubenswrapper[4183]: I0813 19:46:12.509294 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:12Z is after 2025-06-26T12:47:18Z Aug 13 19:46:12 crc kubenswrapper[4183]: I0813 19:46:12.581260 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:46:12 crc kubenswrapper[4183]: I0813 19:46:12.581482 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:46:13 crc kubenswrapper[4183]: I0813 19:46:13.519035 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:13Z is after 2025-06-26T12:47:18Z Aug 13 19:46:14 crc kubenswrapper[4183]: I0813 19:46:14.509354 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:14Z is after 2025-06-26T12:47:18Z Aug 13 19:46:15 crc kubenswrapper[4183]: E0813 19:46:15.416692 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:46:15 crc kubenswrapper[4183]: I0813 19:46:15.508135 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:15Z is after 2025-06-26T12:47:18Z Aug 13 19:46:16 crc kubenswrapper[4183]: E0813 19:46:16.385964 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:16Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.507766 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:16Z is after 2025-06-26T12:47:18Z Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.606104 4183 
kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607732 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607912 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607953 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:16 crc kubenswrapper[4183]: E0813 19:46:16.612289 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:16Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:17 crc kubenswrapper[4183]: I0813 19:46:17.507760 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:17Z is after 2025-06-26T12:47:18Z Aug 13 19:46:18 crc kubenswrapper[4183]: I0813 19:46:18.509153 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:18Z is after 2025-06-26T12:47:18Z Aug 13 19:46:18 crc kubenswrapper[4183]: W0813 19:46:18.734308 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:18Z is after 2025-06-26T12:47:18Z Aug 13 19:46:18 crc kubenswrapper[4183]: E0813 19:46:18.734454 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:18Z is after 2025-06-26T12:47:18Z Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.209340 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.211018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.211174 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.211190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.212634 4183 scope.go:117] "RemoveContainer" 
containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:46:19 crc kubenswrapper[4183]: E0813 19:46:19.213052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.513958 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:19Z is after 2025-06-26T12:47:18Z Aug 13 19:46:20 crc kubenswrapper[4183]: I0813 19:46:20.508721 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:20Z is after 2025-06-26T12:47:18Z Aug 13 19:46:21 crc kubenswrapper[4183]: I0813 19:46:21.509911 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:21Z is after 2025-06-26T12:47:18Z Aug 13 19:46:21 crc kubenswrapper[4183]: E0813 19:46:21.820321 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:21Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:22 crc kubenswrapper[4183]: I0813 19:46:22.508481 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:22Z is after 2025-06-26T12:47:18Z Aug 13 19:46:22 crc kubenswrapper[4183]: I0813 19:46:22.580330 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:46:22 crc kubenswrapper[4183]: I0813 19:46:22.580470 4183 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:46:23 crc kubenswrapper[4183]: E0813 19:46:23.390894 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:23Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.508225 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:23Z is after 2025-06-26T12:47:18Z Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.613426 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615406 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615582 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615626 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:23 crc kubenswrapper[4183]: E0813 19:46:23.619335 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:23Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:24 crc kubenswrapper[4183]: I0813 19:46:24.508866 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:24Z is after 2025-06-26T12:47:18Z Aug 13 19:46:25 crc kubenswrapper[4183]: E0813 19:46:25.417160 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:46:25 crc kubenswrapper[4183]: I0813 19:46:25.508965 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:25Z is after 2025-06-26T12:47:18Z Aug 13 19:46:26 crc kubenswrapper[4183]: W0813 19:46:26.192309 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:26Z is after 2025-06-26T12:47:18Z Aug 13 19:46:26 crc kubenswrapper[4183]: E0813 19:46:26.192390 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:26Z is after 2025-06-26T12:47:18Z Aug 13 19:46:26 crc kubenswrapper[4183]: I0813 19:46:26.508890 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:26Z is after 2025-06-26T12:47:18Z Aug 13 19:46:27 crc kubenswrapper[4183]: I0813 19:46:27.508416 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:27Z is after 2025-06-26T12:47:18Z Aug 13 19:46:28 crc kubenswrapper[4183]: I0813 19:46:28.509326 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:28Z is after 2025-06-26T12:47:18Z Aug 13 19:46:29 crc kubenswrapper[4183]: I0813 19:46:29.507732 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:29Z is after 2025-06-26T12:47:18Z Aug 13 19:46:30 crc kubenswrapper[4183]: E0813 19:46:30.396465 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:30Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.509171 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:30Z is after 2025-06-26T12:47:18Z Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.619914 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.622010 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.622079 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:30 
crc kubenswrapper[4183]: I0813 19:46:30.622098 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.622127 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:30 crc kubenswrapper[4183]: E0813 19:46:30.626393 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:30Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:31 crc kubenswrapper[4183]: I0813 19:46:31.507850 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:31Z is after 2025-06-26T12:47:18Z Aug 13 19:46:31 crc kubenswrapper[4183]: E0813 19:46:31.824915 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:31Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.209187 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.210499 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.210595 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.210615 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.212945 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:46:32 crc kubenswrapper[4183]: E0813 19:46:32.213376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.510109 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:32Z is after 2025-06-26T12:47:18Z Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581386 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581503 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581546 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581805 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.583846 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.583900 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.583916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.585397 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.585847 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" gracePeriod=30 Aug 13 19:46:32 crc kubenswrapper[4183]: E0813 19:46:32.750551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.508606 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:46:33Z is after 2025-06-26T12:47:18Z Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.531882 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log" Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.533863 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/3.log" Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.536919 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" exitCode=255 Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.537005 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"} Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.537060 4183 scope.go:117] "RemoveContainer" containerID="4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f" Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.537440 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.539432 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.539516 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.539540 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.542224 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:46:33 crc kubenswrapper[4183]: E0813 19:46:33.543207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:46:34 crc kubenswrapper[4183]: I0813 19:46:34.511695 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:34Z is after 2025-06-26T12:47:18Z Aug 13 19:46:34 crc kubenswrapper[4183]: I0813 19:46:34.542528 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log" Aug 13 19:46:35 crc kubenswrapper[4183]: E0813 19:46:35.417415 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:46:35 crc 
kubenswrapper[4183]: I0813 19:46:35.508819 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:35Z is after 2025-06-26T12:47:18Z Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.208887 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.210507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.210562 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.210609 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.508479 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:36Z is after 2025-06-26T12:47:18Z Aug 13 19:46:37 crc kubenswrapper[4183]: E0813 19:46:37.401966 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:37Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.509111 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:37Z is after 2025-06-26T12:47:18Z Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.627700 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630333 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630409 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630433 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630466 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:37 crc kubenswrapper[4183]: E0813 19:46:37.634557 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:37Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:38 crc kubenswrapper[4183]: I0813 19:46:38.508190 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:38Z is after 2025-06-26T12:47:18Z Aug 13 19:46:39 crc kubenswrapper[4183]: I0813 19:46:39.507942 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:39Z is after 2025-06-26T12:47:18Z Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.508066 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:40Z is after 2025-06-26T12:47:18Z Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.519061 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.519281 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.521387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.521474 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.521498 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.523226 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:46:40 crc kubenswrapper[4183]: E0813 19:46:40.524113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:46:41 crc kubenswrapper[4183]: I0813 19:46:41.507265 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:41Z is after 2025-06-26T12:47:18Z Aug 13 19:46:41 crc kubenswrapper[4183]: E0813 19:46:41.829421 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:41Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc 
status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:42 crc kubenswrapper[4183]: I0813 19:46:42.508908 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:42Z is after 2025-06-26T12:47:18Z Aug 13 19:46:42 crc kubenswrapper[4183]: I0813 19:46:42.969557 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:46:42 crc kubenswrapper[4183]: E0813 19:46:42.974395 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:42Z is after 2025-06-26T12:47:18Z Aug 13 19:46:43 crc kubenswrapper[4183]: I0813 19:46:43.507078 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:43Z is after 2025-06-26T12:47:18Z Aug 13 19:46:44 crc kubenswrapper[4183]: E0813 19:46:44.408387 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:44Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.508719 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:44Z is after 2025-06-26T12:47:18Z Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.634877 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636828 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636871 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636915 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:44 crc kubenswrapper[4183]: E0813 19:46:44.640455 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:44Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:45 crc kubenswrapper[4183]: E0813 19:46:45.418298 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:46:45 crc kubenswrapper[4183]: I0813 19:46:45.508495 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:45Z is after 2025-06-26T12:47:18Z Aug 13 19:46:46 crc kubenswrapper[4183]: I0813 19:46:46.509767 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:46Z is after 2025-06-26T12:47:18Z Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.209002 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.211679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.211988 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.212106 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.217114 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:46:47 crc kubenswrapper[4183]: E0813 19:46:47.218997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.509395 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:47Z is after 2025-06-26T12:47:18Z Aug 13 19:46:48 crc kubenswrapper[4183]: I0813 19:46:48.509431 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:48Z is after 2025-06-26T12:47:18Z Aug 13 19:46:49 crc kubenswrapper[4183]: I0813 19:46:49.509521 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-08-13T19:46:49Z is after 2025-06-26T12:47:18Z Aug 13 19:46:50 crc kubenswrapper[4183]: I0813 19:46:50.511905 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:50Z is after 2025-06-26T12:47:18Z Aug 13 19:46:51 crc kubenswrapper[4183]: E0813 19:46:51.415323 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.512918 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.640738 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.643827 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.643923 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.643941 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.643979 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:51 crc kubenswrapper[4183]: E0813 19:46:51.648044 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:51 crc kubenswrapper[4183]: E0813 19:46:51.835285 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:52 crc kubenswrapper[4183]: I0813 19:46:52.508157 4183 csi_plugin.go:880] Failed to contact API server 
when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:52Z is after 2025-06-26T12:47:18Z Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.209177 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.211254 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.211362 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.211384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.214540 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:46:53 crc kubenswrapper[4183]: E0813 19:46:53.216083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.508249 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:53Z is after 2025-06-26T12:47:18Z Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.509012 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:54Z is after 2025-06-26T12:47:18Z Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660046 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660276 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660354 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660430 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660490 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:46:54 crc kubenswrapper[4183]: W0813 19:46:54.762914 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:54Z is after 2025-06-26T12:47:18Z Aug 13 19:46:54 crc kubenswrapper[4183]: E0813 19:46:54.763075 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:54Z is after 2025-06-26T12:47:18Z Aug 13 19:46:55 crc kubenswrapper[4183]: E0813 19:46:55.419283 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:46:55 crc kubenswrapper[4183]: I0813 19:46:55.507740 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:55Z is after 2025-06-26T12:47:18Z Aug 13 19:46:56 crc kubenswrapper[4183]: W0813 19:46:56.316182 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:56Z is after 2025-06-26T12:47:18Z Aug 13 19:46:56 crc kubenswrapper[4183]: E0813 19:46:56.317742 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:56Z is after 2025-06-26T12:47:18Z Aug 13 19:46:56 crc kubenswrapper[4183]: I0813 19:46:56.507468 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:56Z is after 2025-06-26T12:47:18Z Aug 13 19:46:57 crc kubenswrapper[4183]: I0813 19:46:57.510435 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:57Z is after 2025-06-26T12:47:18Z Aug 13 19:46:58 crc kubenswrapper[4183]: E0813 19:46:58.420378 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:58Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.510520 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:58Z is after 2025-06-26T12:47:18Z Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.648586 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650512 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650638 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650666 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650710 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:58 crc kubenswrapper[4183]: E0813 19:46:58.655036 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:58Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:59 crc kubenswrapper[4183]: I0813 19:46:59.507745 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:59Z is after 2025-06-26T12:47:18Z Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.209201 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.210994 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.211078 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.211095 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.212387 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:47:00 crc kubenswrapper[4183]: E0813 19:47:00.212844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.507343 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:00Z is after 2025-06-26T12:47:18Z Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.209026 4183 
kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.210969 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.211158 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.211204 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.508677 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:01Z is after 2025-06-26T12:47:18Z Aug 13 19:47:01 crc kubenswrapper[4183]: E0813 19:47:01.841030 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:01Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:02 crc kubenswrapper[4183]: I0813 19:47:02.508683 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:02Z is after 2025-06-26T12:47:18Z Aug 13 19:47:03 crc kubenswrapper[4183]: I0813 19:47:03.508739 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:03Z is after 2025-06-26T12:47:18Z Aug 13 19:47:04 crc kubenswrapper[4183]: W0813 19:47:04.417066 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:04Z is after 2025-06-26T12:47:18Z Aug 13 19:47:04 crc kubenswrapper[4183]: E0813 19:47:04.417169 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:47:04Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:04 crc kubenswrapper[4183]: I0813 19:47:04.509200 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:04Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.208117 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.208129 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.209842 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.209912 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.209931 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.210577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.210707 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.210721 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.211447 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"
Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.212145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.419590 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.424203 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:05Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.507503 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:05Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.655938 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657885 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657904 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657938 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.661734 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:05Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:06 crc kubenswrapper[4183]: I0813 19:47:06.507595 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:06Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:07 crc kubenswrapper[4183]: I0813 19:47:07.508034 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:07Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:08 crc kubenswrapper[4183]: I0813 19:47:08.509584 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:08Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:09 crc kubenswrapper[4183]: I0813 19:47:09.508409 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:09Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:10 crc kubenswrapper[4183]: I0813 19:47:10.508936 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:10Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:11 crc kubenswrapper[4183]: I0813 19:47:11.508554 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:11Z is after 2025-06-26T12:47:18Z
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:11Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:12 crc kubenswrapper[4183]: E0813 19:47:12.429244 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:12Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.508767 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:12Z is after 2025-06-26T12:47:18Z Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.662198 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664207 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664223 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664255 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:12 crc kubenswrapper[4183]: E0813 19:47:12.667699 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:12Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:13 crc kubenswrapper[4183]: I0813 19:47:13.507705 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:13Z is after 2025-06-26T12:47:18Z Aug 13 19:47:14 crc kubenswrapper[4183]: I0813 19:47:14.511515 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:47:14Z is after 2025-06-26T12:47:18Z Aug 13 19:47:14 crc kubenswrapper[4183]: I0813 19:47:14.969073 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:47:14 crc kubenswrapper[4183]: E0813 19:47:14.974040 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:14Z is after 2025-06-26T12:47:18Z Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.211738 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.214599 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.215001 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.216039 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.223661 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:47:15 crc kubenswrapper[4183]: E0813 19:47:15.224342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:47:15 crc kubenswrapper[4183]: E0813 19:47:15.419994 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.507591 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:15Z is after 2025-06-26T12:47:18Z Aug 13 19:47:16 crc kubenswrapper[4183]: I0813 19:47:16.508495 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:16Z is after 2025-06-26T12:47:18Z Aug 13 19:47:17 crc kubenswrapper[4183]: I0813 19:47:17.507568 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:17Z is after 2025-06-26T12:47:18Z Aug 13 19:47:18 crc kubenswrapper[4183]: W0813 19:47:18.411205 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get 
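The certificate_manager.go pair at 19:47:14 is the recovery path trying to run: the kubelet rotates its client certificate by POSTing a CertificateSigningRequest for the kubernetes.io/kube-apiserver-client-kubelet signer, but that POST travels over the same broken TLS session, so rotation can never complete. A rough client-go sketch of such a request, under the assumption of an authenticated clientset (key size, GenerateName, and usages here are illustrative):

    package sketch

    import (
    	"context"
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"

    	certificatesv1 "k8s.io/api/certificates/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // requestKubeletClientCert approximates one rotation attempt: generate a
    // key, wrap it in a CSR carrying the node identity, and POST it.
    func requestKubeletClientCert(ctx context.Context, cs kubernetes.Interface) error {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return err
    	}
    	der, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
    		Subject: pkix.Name{
    			CommonName:   "system:node:crc",        // node identity
    			Organization: []string{"system:nodes"}, // node group
    		},
    	}, key)
    	if err != nil {
    		return err
    	}
    	csr := &certificatesv1.CertificateSigningRequest{
    		ObjectMeta: metav1.ObjectMeta{GenerateName: "csr-"},
    		Spec: certificatesv1.CertificateSigningRequestSpec{
    			Request:    pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der}),
    			SignerName: "kubernetes.io/kube-apiserver-client-kubelet",
    			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
    		},
    	}
    	// This Create is the POST to .../certificatesigningrequests that the
    	// log shows failing while the old client certificate is expired.
    	_, err = cs.CertificatesV1().CertificateSigningRequests().Create(ctx, csr, metav1.CreateOptions{})
    	return err
    }

Until a CSR can be created and signed, the kubelet is stuck: the credential it needs to fix the problem can only be issued over the connection the problem has broken.
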
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:18Z is after 2025-06-26T12:47:18Z Aug 13 19:47:18 crc kubenswrapper[4183]: E0813 19:47:18.411326 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:18Z is after 2025-06-26T12:47:18Z Aug 13 19:47:18 crc kubenswrapper[4183]: I0813 19:47:18.508359 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:18Z is after 2025-06-26T12:47:18Z Aug 13 19:47:19 crc kubenswrapper[4183]: E0813 19:47:19.433994 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:19Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.507416 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:19Z is after 2025-06-26T12:47:18Z Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.668611 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671730 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671895 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671913 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671939 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:19 crc kubenswrapper[4183]: E0813 19:47:19.675885 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:19Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.209458 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.212833 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.212929 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.212950 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.214992 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.508204 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:20Z is after 2025-06-26T12:47:18Z Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.719999 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.722455 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.722689 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.723695 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.723877 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.723902 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.509064 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:21Z is after 2025-06-26T12:47:18Z Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.558967 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.725590 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.727073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.727166 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.727188 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:21 crc kubenswrapper[4183]: E0813 19:47:21.851710 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: 
Aug 13 19:47:21 crc kubenswrapper[4183]: E0813 19:47:21.851710 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:21Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:22 crc kubenswrapper[4183]: I0813 19:47:22.509575 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:23 crc kubenswrapper[4183]: I0813 19:47:23.509622 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:23Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:24 crc kubenswrapper[4183]: I0813 19:47:24.508707 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:24Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:25 crc kubenswrapper[4183]: E0813 19:47:25.420710 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:47:25 crc kubenswrapper[4183]: I0813 19:47:25.509082 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:25Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:26 crc kubenswrapper[4183]: E0813 19:47:26.438944 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:26Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.509324 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.676882 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678202 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678232 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678257 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678283 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:26 crc kubenswrapper[4183]: E0813 19:47:26.683126 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:26Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:27 crc kubenswrapper[4183]: I0813 19:47:27.508125 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:27Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:28 crc kubenswrapper[4183]: I0813 19:47:28.512301 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.208320 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.210618 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.210723 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.210741 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.212256 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.508562 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:29Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.581158 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.582083 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.584193 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.584311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.508950 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:30Z is after 2025-06-26T12:47:18Z Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.759441 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log" Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.762178 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"} Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.762366 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.763392 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.763448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.763468 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.509020 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:47:18Z Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.768549 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.769851 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log" Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.772701 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" exitCode=255 Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.772760 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"} Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.772973 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.773033 4183 kubelet_node_status.go:402] 
"Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.774486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.774511 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.774525 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.775962 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.776312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.858065 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.858166 4183 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.862471 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.510068 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:32Z is after 2025-06-26T12:47:18Z Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.581916 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.582098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.780905 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 19:47:33 crc kubenswrapper[4183]: E0813 19:47:33.454745 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:33Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.510367 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:33Z is after 2025-06-26T12:47:18Z Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.685376 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687753 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687831 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687856 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 
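The back-off figures in the pod_workers errors have been climbing through this log: 40s for cluster-policy-controller at 19:47:05, then 1m20s and now 2m40s for kube-apiserver-check-endpoints. That matches the kubelet's crash-loop policy of doubling the restart delay from a 10-second base up to a 5-minute ceiling (stated here as the upstream defaults, not values parsed from this log). A tiny sketch of the progression:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// CrashLoopBackOff-style delays: start at 10s, double per restart,
    	// cap at 5m. Steps 3, 4 and 5 are the 40s, 1m20s and 2m40s seen above.
    	const base = 10 * time.Second
    	const maxDelay = 5 * time.Minute
    	delay := base
    	for i := 1; i <= 7; i++ {
    		fmt.Printf("restart %d: back-off %s\n", i, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }
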
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687888 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:33 crc kubenswrapper[4183]: E0813 19:47:33.697290 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:33Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.509279 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:34Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.891441 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.891615 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.893176 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.893232 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.893250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.895909 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:47:34 crc kubenswrapper[4183]: E0813 19:47:34.896584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:35 crc kubenswrapper[4183]: E0813 19:47:35.422135 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:47:35 crc kubenswrapper[4183]: I0813 19:47:35.508730 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:35Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:36 crc kubenswrapper[4183]: I0813 19:47:36.507939 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:36 crc kubenswrapper[4183]: E0813 19:47:36.808517 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:36Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.507996 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:37Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.564200 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.564474 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.565916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.565990 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.566009 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.567289 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:47:37 crc kubenswrapper[4183]: E0813 19:47:37.567716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:38 crc kubenswrapper[4183]: I0813 19:47:38.509349 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:38Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:39 crc kubenswrapper[4183]: I0813 19:47:39.508117 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:39Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:40 crc kubenswrapper[4183]: E0813 19:47:40.462748 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:40Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.508756 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:40Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.698172 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700409 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700442 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:40 crc kubenswrapper[4183]: E0813 19:47:40.709132 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:40Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:41 crc kubenswrapper[4183]: I0813 19:47:41.512169 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:42 crc kubenswrapper[4183]: I0813 19:47:42.507757 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:42 crc kubenswrapper[4183]: I0813 19:47:42.582073 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:47:42 crc kubenswrapper[4183]: I0813 19:47:42.582216 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:47:43 crc kubenswrapper[4183]: I0813 19:47:43.508350 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:43Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:44 crc kubenswrapper[4183]: I0813 19:47:44.508294 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:44Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:45 crc kubenswrapper[4183]: E0813 19:47:45.422727 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:47:45 crc kubenswrapper[4183]: I0813 19:47:45.509076 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:46 crc kubenswrapper[4183]: I0813 19:47:46.508744 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:46 crc kubenswrapper[4183]: E0813 19:47:46.812453 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:46Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:46 crc kubenswrapper[4183]: I0813 19:47:46.969286 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:47:46 crc kubenswrapper[4183]: E0813 19:47:46.975593 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:47 crc kubenswrapper[4183]: E0813 19:47:47.467519 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:47Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.509582 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.709930 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.713739 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.713967 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.713987 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.714020 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:47 crc kubenswrapper[4183]: E0813 19:47:47.718181 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:47Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:48 crc kubenswrapper[4183]: W0813 19:47:48.118499 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:48 crc kubenswrapper[4183]: E0813 19:47:48.118609 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:48 crc kubenswrapper[4183]: I0813 19:47:48.508468 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.209234 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.210976 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.211070 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.211093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.212341 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:47:49 crc kubenswrapper[4183]: E0813 19:47:49.212814 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.507056 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.508037 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:50Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.893941 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:42490->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.894144 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:42490->192.168.126.11:10357: read: connection reset by peer"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.894229 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.894387 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.896037 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.896145 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.896164 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
restarted" Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.898425 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" gracePeriod=30 Aug 13 19:47:51 crc kubenswrapper[4183]: E0813 19:47:51.022282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:47:51 crc kubenswrapper[4183]: W0813 19:47:51.416612 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:51Z is after 2025-06-26T12:47:18Z Aug 13 19:47:51 crc kubenswrapper[4183]: E0813 19:47:51.416762 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:51Z is after 2025-06-26T12:47:18Z Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.509431 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:51Z is after 2025-06-26T12:47:18Z Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.849917 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log" Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.851326 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log" Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854231 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" exitCode=255 Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854297 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854357 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854491 4183 kubelet_node_status.go:402] "Setting node annotation to 
enable volume controller attach/detach" Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.856077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.856150 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.856167 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.857494 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:47:51 crc kubenswrapper[4183]: E0813 19:47:51.859186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:47:52 crc kubenswrapper[4183]: I0813 19:47:52.507851 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:52Z is after 2025-06-26T12:47:18Z Aug 13 19:47:52 crc kubenswrapper[4183]: I0813 19:47:52.859598 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log" Aug 13 19:47:53 crc kubenswrapper[4183]: I0813 19:47:53.508336 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:53Z is after 2025-06-26T12:47:18Z Aug 13 19:47:53 crc kubenswrapper[4183]: W0813 19:47:53.683937 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:53Z is after 2025-06-26T12:47:18Z Aug 13 19:47:53 crc kubenswrapper[4183]: E0813 19:47:53.684046 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:53Z is after 2025-06-26T12:47:18Z Aug 13 19:47:54 crc kubenswrapper[4183]: E0813 19:47:54.472244 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:54Z is after 2025-06-26T12:47:18Z" interval="7s" 
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.507411 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:54Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661003 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661149 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661179 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661211 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661232 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.719219 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721483 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721506 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721536 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:54 crc kubenswrapper[4183]: E0813 19:47:54.725028 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:54Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:55 crc kubenswrapper[4183]: E0813 19:47:55.424009 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:47:55 crc kubenswrapper[4183]: I0813 19:47:55.508465 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:55Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:56 crc kubenswrapper[4183]: I0813 19:47:56.509220 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:56Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:56 crc kubenswrapper[4183]: E0813 19:47:56.817564 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:56Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:57 crc kubenswrapper[4183]: I0813 19:47:57.508461 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:57Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:58 crc kubenswrapper[4183]: I0813 19:47:58.508564 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:58Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:59 crc kubenswrapper[4183]: I0813 19:47:59.508359 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:59Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.208959 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.211507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.211677 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.211760 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.507257 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:00Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.518471 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.518721 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.520642 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.520730 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.520752 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.522654 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:48:00 crc kubenswrapper[4183]: E0813 19:48:00.523656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:48:01 crc kubenswrapper[4183]: E0813 19:48:01.478172 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:01Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.507668 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:01Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.725214 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727276 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727358 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727437 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:01 crc kubenswrapper[4183]: E0813 19:48:01.737482 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:01Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:02 crc kubenswrapper[4183]: I0813 19:48:02.509365 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:02Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:03 crc kubenswrapper[4183]: I0813 19:48:03.507856 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:03Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.208541 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.210210 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.210255 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.210268 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.211905 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:48:04 crc kubenswrapper[4183]: E0813 19:48:04.212462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.508972 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:04Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:05 crc kubenswrapper[4183]: E0813 19:48:05.424977 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:48:05 crc kubenswrapper[4183]: I0813 19:48:05.510017 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:05Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:06 crc kubenswrapper[4183]: I0813 19:48:06.509015 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:06Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:06 crc kubenswrapper[4183]: E0813 19:48:06.823687 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:06Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:48:07 crc kubenswrapper[4183]: I0813 19:48:07.507939 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:07Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.208284 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.210308 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.210385 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.210406 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:08 crc kubenswrapper[4183]: E0813 19:48:08.482375 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:08Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.529238 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:08Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.737626 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739132 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739309 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739371 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739419 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:08 crc kubenswrapper[4183]: E0813 19:48:08.742847 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:08Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:09 crc kubenswrapper[4183]: I0813 19:48:09.508101 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:09Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:10 crc kubenswrapper[4183]: I0813 19:48:10.509171 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:10Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:11 crc kubenswrapper[4183]: I0813 19:48:11.507065 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:11Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.208022 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.209637 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.209725 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.209746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.211524 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:48:12 crc kubenswrapper[4183]: E0813 19:48:12.212281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.508424 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:12Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:13 crc kubenswrapper[4183]: I0813 19:48:13.508153 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:13Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:14 crc kubenswrapper[4183]: I0813 19:48:14.508084 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:14Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:14 crc kubenswrapper[4183]: W0813 19:48:14.894124 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:14Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:14 crc kubenswrapper[4183]: E0813 19:48:14.894223 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:14Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:15 crc kubenswrapper[4183]: E0813 19:48:15.425881 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:48:15 crc kubenswrapper[4183]: E0813 19:48:15.486630 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:15Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.507913 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:15Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.743079 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745850 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745935 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745967 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:15 crc kubenswrapper[4183]: E0813 19:48:15.756009 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:15Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:16 crc kubenswrapper[4183]: I0813 19:48:16.507684 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:16Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:16 crc kubenswrapper[4183]: E0813 19:48:16.828651 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:16Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:48:17 crc kubenswrapper[4183]: I0813 19:48:17.508695 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:17Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.208882 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.210241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.210305 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.210322 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.211535 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:48:18 crc kubenswrapper[4183]: E0813 19:48:18.212120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.509491 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:18Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.969699 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:48:18 crc kubenswrapper[4183]: E0813 19:48:18.974609 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:18Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:19 crc kubenswrapper[4183]: I0813 19:48:19.507337 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:19Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:20 crc kubenswrapper[4183]: I0813 19:48:20.509878 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:20Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:21 crc kubenswrapper[4183]: I0813 19:48:21.507142 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:21Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:22 crc kubenswrapper[4183]: E0813 19:48:22.492982 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:22Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.509072 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.756562 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758482 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758512 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758553 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:22 crc kubenswrapper[4183]: E0813 19:48:22.762269 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:22Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:23 crc kubenswrapper[4183]: I0813 19:48:23.508701 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:23Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.208815 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.210276 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.210355 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.210373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.216334 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:48:24 crc kubenswrapper[4183]: E0813 19:48:24.218140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.508837 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:24Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.208831 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.210125 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.210182 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.210202 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:25 crc kubenswrapper[4183]: E0813 19:48:25.427029 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.507028 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:25Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:26 crc kubenswrapper[4183]: I0813 19:48:26.509146 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:26 crc kubenswrapper[4183]: E0813 19:48:26.834373 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:26Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:48:27 crc kubenswrapper[4183]: I0813 19:48:27.508562 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:27Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:28 crc kubenswrapper[4183]: W0813 19:48:28.188409 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:28 crc kubenswrapper[4183]: E0813 19:48:28.188557 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:28 crc kubenswrapper[4183]: I0813 19:48:28.507603 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:29 crc kubenswrapper[4183]: E0813 19:48:29.500911 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:29Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.511026 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:29Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.762589 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764070 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764293 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764341 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:29 crc kubenswrapper[4183]: E0813 19:48:29.768854 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:29Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:30 crc kubenswrapper[4183]: I0813 19:48:30.517188 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:30Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.209108 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.210715 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.210953 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.210994 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.212398 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:48:31 crc kubenswrapper[4183]: E0813 19:48:31.212827 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.507232 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:31Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:32 crc kubenswrapper[4183]: I0813 19:48:32.507707 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:32Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:33 crc kubenswrapper[4183]: I0813 19:48:33.508146 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:33Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:34 crc kubenswrapper[4183]: I0813 19:48:34.507587 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:34Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:35 crc kubenswrapper[4183]: E0813 19:48:35.428027 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:48:35 crc kubenswrapper[4183]: I0813 19:48:35.507216 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:35Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.505587 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.507713 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:36 crc kubenswrapper[4183]: W0813 19:48:36.568675 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.568942 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.769224 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.770923 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.770997 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.771012 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.771107 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.778389 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.842056 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.209111 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.211424 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.211536 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.211552 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.213289 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:48:37 crc kubenswrapper[4183]: E0813 19:48:37.214054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.507690 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:37Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:38 crc kubenswrapper[4183]: I0813 19:48:38.510445 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:38Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:39 crc kubenswrapper[4183]: I0813 19:48:39.508593 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:39Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:40 crc kubenswrapper[4183]: I0813 19:48:40.509016 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:40Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:41 crc kubenswrapper[4183]: I0813 19:48:41.508595 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:41 crc kubenswrapper[4183]: W0813 19:48:41.776148 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:41 crc kubenswrapper[4183]: E0813 19:48:41.776301 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.208554 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.210237 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.210343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.210366 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.212399 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:48:42 crc kubenswrapper[4183]: E0813 19:48:42.213095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.509594 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.508017 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:43Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:43 crc kubenswrapper[4183]: E0813 19:48:43.510283 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:43Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.779767 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781546 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781625 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781640 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781718 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:43 crc kubenswrapper[4183]: E0813 19:48:43.785898 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:43Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:44 crc kubenswrapper[4183]: I0813 19:48:44.508607 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:44Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:45 crc kubenswrapper[4183]: E0813 19:48:45.428864 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:48:45 crc kubenswrapper[4183]: I0813 19:48:45.508371 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:46 crc kubenswrapper[4183]: I0813 19:48:46.507471 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:46 crc kubenswrapper[4183]: E0813 19:48:46.846934 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:46Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:48:47 crc kubenswrapper[4183]: I0813 19:48:47.507610 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:48 crc kubenswrapper[4183]: I0813 19:48:48.507390 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:49 crc kubenswrapper[4183]: I0813 19:48:49.508906 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:49 crc kubenswrapper[4183]: W0813 19:48:49.644590 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:49 crc kubenswrapper[4183]: E0813 19:48:49.644687 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.209026 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.210881 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.210972 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.210991 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.212513 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.213372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.508219 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.516201 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.787053 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.788997 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.789071 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.789090 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.789120 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.792941 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.969403 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.974173 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z Aug 13 19:48:51 crc kubenswrapper[4183]: I0813 19:48:51.507882 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:51Z is after 2025-06-26T12:47:18Z Aug 13 19:48:52 crc kubenswrapper[4183]: I0813 19:48:52.508568 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:52Z is after 2025-06-26T12:47:18Z Aug 13 19:48:53 crc kubenswrapper[4183]: I0813 19:48:53.508038 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:53Z is after 2025-06-26T12:47:18Z Aug 13 19:48:54 crc kubenswrapper[4183]: E0813 19:48:54.270502 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:54 crc kubenswrapper[4183]: E0813 19:48:54.288343 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.508609 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:54Z is after 2025-06-26T12:47:18Z Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662493 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662615 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662669 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662703 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662726 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.208643 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.210367 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.210468 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.210485 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.212012 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:48:55 crc kubenswrapper[4183]: E0813 19:48:55.212463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:48:55 crc kubenswrapper[4183]: E0813 19:48:55.269841 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:55 crc kubenswrapper[4183]: E0813 19:48:55.429093 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.507617 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:55Z is after 2025-06-26T12:47:18Z Aug 13 19:48:56 crc kubenswrapper[4183]: E0813 19:48:56.269957 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:56 crc kubenswrapper[4183]: I0813 19:48:56.508306 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:56Z is after 2025-06-26T12:47:18Z Aug 13 19:48:56 crc kubenswrapper[4183]: E0813 19:48:56.851929 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:56Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:48:57 crc kubenswrapper[4183]: E0813 19:48:57.270448 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.508358 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:57Z is after 2025-06-26T12:47:18Z Aug 13 19:48:57 crc kubenswrapper[4183]: E0813 19:48:57.520207 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:57Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.793333 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.794972 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.795949 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.795969 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.796000 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:57 crc kubenswrapper[4183]: E0813 19:48:57.801474 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:57Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:58 crc kubenswrapper[4183]: E0813 19:48:58.271041 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:58 crc kubenswrapper[4183]: I0813 19:48:58.508150 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:58Z is after 2025-06-26T12:47:18Z Aug 13 19:48:59 crc kubenswrapper[4183]: E0813 19:48:59.270193 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:59 crc kubenswrapper[4183]: I0813 19:48:59.507252 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:59Z is after 2025-06-26T12:47:18Z Aug 13 19:49:00 crc kubenswrapper[4183]: E0813 19:49:00.270093 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:00 crc kubenswrapper[4183]: I0813 19:49:00.507642 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:00Z is after 2025-06-26T12:47:18Z Aug 13 19:49:01 crc kubenswrapper[4183]: E0813 19:49:01.270053 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:01 crc kubenswrapper[4183]: I0813 19:49:01.507537 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:01Z is after 2025-06-26T12:47:18Z Aug 13 19:49:02 crc kubenswrapper[4183]: E0813 19:49:02.270147 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:02 crc kubenswrapper[4183]: I0813 19:49:02.509575 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:02Z is after 2025-06-26T12:47:18Z Aug 13 19:49:03 crc kubenswrapper[4183]: E0813 19:49:03.270170 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:03 crc kubenswrapper[4183]: I0813 19:49:03.508092 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:03Z is after 2025-06-26T12:47:18Z Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.208680 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.210579 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.210675 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.210693 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.212286 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.213044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.270334 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.289000 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.508476 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:04Z is after 2025-06-26T12:47:18Z Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.525146 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:04Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.801907 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803210 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803313 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803375 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.807056 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:04Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:49:05 crc kubenswrapper[4183]: E0813 19:49:05.270106 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:05 crc kubenswrapper[4183]: E0813 19:49:05.430194 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:49:05 crc kubenswrapper[4183]: I0813 19:49:05.507344 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:05Z is after 2025-06-26T12:47:18Z Aug 13 19:49:06 crc kubenswrapper[4183]: E0813 19:49:06.270028 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:06 crc kubenswrapper[4183]: I0813 19:49:06.507669 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:06Z is after 2025-06-26T12:47:18Z Aug 13 19:49:06 crc kubenswrapper[4183]: E0813 19:49:06.858719 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:06Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:07 crc kubenswrapper[4183]: E0813 19:49:07.270036 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:07 crc kubenswrapper[4183]: I0813 19:49:07.507038 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:07Z is after 2025-06-26T12:47:18Z Aug 13 19:49:08 crc kubenswrapper[4183]: E0813 19:49:08.270104 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:08 crc kubenswrapper[4183]: I0813 19:49:08.509900 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:08Z is after 2025-06-26T12:47:18Z Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.208903 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.211430 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.211701 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.212927 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:09 crc kubenswrapper[4183]: E0813 19:49:09.270291 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.507217 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:09Z is after 2025-06-26T12:47:18Z Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.209179 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.210653 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.210693 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.210705 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.212199 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:49:10 crc kubenswrapper[4183]: E0813 19:49:10.212558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:49:10 crc kubenswrapper[4183]: E0813 19:49:10.270044 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.507652 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:10Z is after 2025-06-26T12:47:18Z Aug 13 19:49:10 crc kubenswrapper[4183]: W0813 19:49:10.663149 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:10Z is after 2025-06-26T12:47:18Z Aug 13 19:49:10 crc kubenswrapper[4183]: E0813 19:49:10.663336 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:10Z is after 2025-06-26T12:47:18Z Aug 13 19:49:11 crc kubenswrapper[4183]: E0813 19:49:11.270245 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.507705 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:11Z is after 2025-06-26T12:47:18Z Aug 13 19:49:11 crc kubenswrapper[4183]: E0813 19:49:11.530195 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:11Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.808058 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817246 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817337 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817360 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817390 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:11 crc kubenswrapper[4183]: E0813 19:49:11.820833 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-08-13T19:49:11Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:49:12 crc kubenswrapper[4183]: E0813 19:49:12.270100 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:12 crc kubenswrapper[4183]: I0813 19:49:12.508425 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:12Z is after 2025-06-26T12:47:18Z Aug 13 19:49:13 crc kubenswrapper[4183]: E0813 19:49:13.270198 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:13 crc kubenswrapper[4183]: I0813 19:49:13.511476 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:13Z is after 2025-06-26T12:47:18Z Aug 13 19:49:14 crc kubenswrapper[4183]: E0813 19:49:14.270548 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:14 crc kubenswrapper[4183]: E0813 19:49:14.289133 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:14 crc kubenswrapper[4183]: I0813 19:49:14.510249 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:14Z is after 2025-06-26T12:47:18Z Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.208334 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.209861 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.209947 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.209964 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.211520 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:49:15 crc kubenswrapper[4183]: E0813 19:49:15.270310 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:15 crc kubenswrapper[4183]: E0813 19:49:15.430490 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.509289 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:15Z is after 2025-06-26T12:47:18Z Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.152473 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log" Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.154216 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.154441 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.155448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.155521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.155541 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:16 
crc kubenswrapper[4183]: E0813 19:49:16.270299 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.509020 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:16Z is after 2025-06-26T12:47:18Z Aug 13 19:49:16 crc kubenswrapper[4183]: E0813 19:49:16.868279 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:16Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:16 crc kubenswrapper[4183]: E0813 19:49:16.868752 4183 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:16 crc kubenswrapper[4183]: E0813 19:49:16.874032 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:16Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:17 crc kubenswrapper[4183]: E0813 19:49:17.270118 4183 transport.go:123] "No 
valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:17 crc kubenswrapper[4183]: I0813 19:49:17.508401 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:17Z is after 2025-06-26T12:47:18Z Aug 13 19:49:18 crc kubenswrapper[4183]: E0813 19:49:18.270308 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.509244 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:18Z is after 2025-06-26T12:47:18Z Aug 13 19:49:18 crc kubenswrapper[4183]: E0813 19:49:18.536924 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:18Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.821885 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823712 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823832 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823855 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823894 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:18 crc kubenswrapper[4183]: E0813 19:49:18.828073 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:18Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:49:19 crc kubenswrapper[4183]: E0813 19:49:19.270703 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:19 crc kubenswrapper[4183]: W0813 19:49:19.467073 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:19Z is after 2025-06-26T12:47:18Z Aug 13 19:49:19 crc kubenswrapper[4183]: E0813 19:49:19.467173 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:19Z is after 2025-06-26T12:47:18Z Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.507924 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:19Z is after 2025-06-26T12:47:18Z Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.581401 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.582025 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.583832 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.583887 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.583910 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.209155 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.210701 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.210840 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.210864 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:20 crc kubenswrapper[4183]: E0813 19:49:20.270522 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.508890 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:20Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:20 crc kubenswrapper[4183]: E0813 19:49:20.708455 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:20Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:21 crc kubenswrapper[4183]: E0813 19:49:21.269708 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.508993 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:21Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.558277 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.558456 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.561552 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.561720 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.561906 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.209056 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.210374 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.210467 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.210483 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.211652 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:49:22 crc kubenswrapper[4183]: E0813 19:49:22.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:49:22 crc kubenswrapper[4183]: E0813 19:49:22.269829 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.507693 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.582060 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.582412 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.969453 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:49:22 crc kubenswrapper[4183]: E0813 19:49:22.975151 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:23 crc kubenswrapper[4183]: E0813 19:49:23.270008 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:23 crc kubenswrapper[4183]: I0813 19:49:23.507942 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:23Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:24 crc kubenswrapper[4183]: E0813 19:49:24.270019 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:24 crc kubenswrapper[4183]: E0813 19:49:24.289754 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:24 crc kubenswrapper[4183]: I0813 19:49:24.507602 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:24Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.270531 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.431533 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.507442 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:25Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.540981 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:25Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.828733 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830238 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830305 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830323 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830347 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.834565 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:25Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:49:26 crc kubenswrapper[4183]: E0813 19:49:26.270401 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:26 crc kubenswrapper[4183]: I0813 19:49:26.507524 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:27 crc kubenswrapper[4183]: E0813 19:49:27.270871 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:27 crc kubenswrapper[4183]: I0813 19:49:27.508099 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:27Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:28 crc kubenswrapper[4183]: E0813 19:49:28.270537 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:28 crc kubenswrapper[4183]: I0813 19:49:28.507909 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:29 crc kubenswrapper[4183]: E0813 19:49:29.270255 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:29 crc kubenswrapper[4183]: I0813 19:49:29.507404 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:29Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.270553 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:30 crc kubenswrapper[4183]: I0813 19:49:30.509893 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.715971 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.723222 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.85870034 +0000 UTC m=+1.551365198,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.729334 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.8587333 +0000 UTC m=+1.551398038,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.735411 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.85874733 +0000 UTC m=+1.551411958,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.744178 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.024230731 +0000 UTC m=+1.716895459,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.748454 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.024667024 +0000 UTC m=+1.717331842,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.751936 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.024686724 +0000 UTC m=+1.717351492,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.756567 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b190ee1238d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.158930317 +0000 UTC m=+1.851595035,LastTimestamp:2025-08-13 19:43:55.158930317 +0000 UTC m=+1.851595035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.761713 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.317392991 +0000 UTC m=+2.010058039,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.767268 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.317419641 +0000 UTC m=+2.010084449,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.773494 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.317434591 +0000 UTC m=+2.010099389,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.780170 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.329246191 +0000 UTC m=+2.021910959,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.788362 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.329270591 +0000 UTC m=+2.021935419,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.794122 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.32928957 +0000 UTC m=+2.021954188,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.799561 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.32933991 +0000 UTC m=+2.022004657,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.804277 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.329369089 +0000 UTC m=+2.022033867,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.809238 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.329383399 +0000 UTC m=+2.022048027,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.814081 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.332498119 +0000 UTC m=+2.025162897,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.819425 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.332519098 +0000 UTC m=+2.025183846,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.824567 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.332533998 +0000 UTC m=+2.025198706,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.829662 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.334421288 +0000 UTC m=+2.027086076,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.834495 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.334438458 +0000 UTC m=+2.027103186,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.839365 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.334449487 +0000 UTC m=+2.027114225,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.845902 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1934520c58 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.787086936 +0000 UTC m=+2.479751734,LastTimestamp:2025-08-13 19:43:55.787086936 +0000 UTC m=+2.479751734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.851094 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b193452335e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.787096926 +0000 UTC m=+2.479761664,LastTimestamp:2025-08-13 19:43:55.787096926 +0000 UTC m=+2.479761664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.858497 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b193454f3a7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.787277223 +0000 UTC m=+2.479942161,LastTimestamp:2025-08-13 19:43:55.787277223 +0000 UTC m=+2.479942161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.863370 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1934c22012 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.794432018 +0000 UTC m=+2.487096756,LastTimestamp:2025-08-13 19:43:55.794432018 +0000 UTC m=+2.487096756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.868318 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1935677efa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.805269754 +0000 UTC m=+2.497934402,LastTimestamp:2025-08-13 19:43:55.805269754 +0000 UTC m=+2.497934402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.873439 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b199886db6b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.468269419 +0000 UTC m=+4.160934207,LastTimestamp:2025-08-13 19:43:57.468269419 +0000 UTC m=+4.160934207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.878613 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1998dd30be openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.473927358 +0000 UTC m=+4.166592086,LastTimestamp:2025-08-13 19:43:57.473927358 +0000 UTC m=+4.166592086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.883898 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b19999cbe50 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.486480976 +0000 UTC m=+4.179145604,LastTimestamp:2025-08-13 19:43:57.486480976 +0000 UTC m=+4.179145604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.889369 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b1999c204e5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.488923877 +0000 UTC m=+4.181588535,LastTimestamp:2025-08-13 19:43:57.488923877 +0000 UTC m=+4.181588535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.895540 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b199b54a9df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.515311583 +0000 UTC m=+4.207976331,LastTimestamp:2025-08-13 19:43:57.515311583 +0000 UTC m=+4.207976331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.900880 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b199e67d773 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.566900083 +0000 UTC m=+4.259564721,LastTimestamp:2025-08-13 19:43:57.566900083 +0000 UTC m=+4.259564721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.906976 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b199f3a8cc6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.580709062 +0000 UTC m=+4.273373930,LastTimestamp:2025-08-13 19:43:57.580709062 +0000 UTC m=+4.273373930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.915324 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b199fe9c443 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.592192067 +0000 UTC m=+4.284856765,LastTimestamp:2025-08-13 19:43:57.592192067 +0000 UTC m=+4.284856765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.923950 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19a0082eef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.594185455 +0000 UTC m=+4.286850313,LastTimestamp:2025-08-13 19:43:57.594185455 +0000 UTC m=+4.286850313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.929030 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b19a2a80e70 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.638217328 +0000 UTC m=+4.330882056,LastTimestamp:2025-08-13 19:43:57.638217328 +0000 UTC m=+4.330882056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.935053 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19b35fe1a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.918699942 +0000 UTC m=+4.611364680,LastTimestamp:2025-08-13 19:43:57.918699942 +0000 UTC m=+4.611364680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.940670 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19ba50d163 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.035153251 +0000 UTC m=+4.727818009,LastTimestamp:2025-08-13 19:43:58.035153251 +0000 UTC m=+4.727818009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.946372 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19ba6c9dae openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.036975022 +0000 UTC m=+4.729639900,LastTimestamp:2025-08-13 19:43:58.036975022 +0000 UTC m=+4.729639900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.953195 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b19c16e2579 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.154515833 +0000 UTC m=+4.847180581,LastTimestamp:2025-08-13 19:43:58.154515833 +0000 UTC m=+4.847180581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.958937 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b19c770630c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.255325964 +0000 UTC m=+4.947990712,LastTimestamp:2025-08-13 19:43:58.255325964 +0000 UTC m=+4.947990712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.965183 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b19c89e5cea openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.275116266 +0000 UTC m=+4.967781174,LastTimestamp:2025-08-13 19:43:58.275116266 +0000 UTC m=+4.967781174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.971982 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b19c998e3fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.291534842 +0000 UTC m=+4.984199570,LastTimestamp:2025-08-13 19:43:58.291534842 +0000 UTC m=+4.984199570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.978918 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b19cb0fb052 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.316097618 +0000 UTC m=+5.008762296,LastTimestamp:2025-08-13 19:43:58.316097618 +0000 UTC m=+5.008762296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.984856 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19e5fef6de openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.767986398 +0000 UTC m=+5.460651056,LastTimestamp:2025-08-13 19:43:58.767986398 +0000 UTC m=+5.460651056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.989255 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19fc142bc3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.138474947 +0000 UTC m=+5.831139825,LastTimestamp:2025-08-13 19:43:59.138474947 +0000 UTC m=+5.831139825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.994025 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19fc3be3f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.141078005 +0000 UTC m=+5.833742753,LastTimestamp:2025-08-13 19:43:59.141078005 +0000 UTC m=+5.833742753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.999221 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a20af9846 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.752640582 +0000 UTC m=+6.445305220,LastTimestamp:2025-08-13 19:43:59.752640582 +0000 UTC m=+6.445305220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.005263 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a2538788f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.828719759 +0000 UTC m=+6.521384507,LastTimestamp:2025-08-13 19:43:59.828719759 +0000 UTC m=+6.521384507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.010708 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b1a33bbabba openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.072199098 +0000 UTC m=+6.764864006,LastTimestamp:2025-08-13 19:44:00.072199098 +0000 UTC m=+6.764864006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.017311 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1a352c73be openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.09636755 +0000 UTC m=+6.789032298,LastTimestamp:2025-08-13 19:44:00.09636755 +0000 UTC m=+6.789032298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.022588 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a36add0c5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.121622725 +0000 UTC m=+6.814287623,LastTimestamp:2025-08-13 19:44:00.121622725 +0000 UTC m=+6.814287623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.027421 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a36e70dda openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.125373914 +0000 UTC m=+6.818038642,LastTimestamp:2025-08-13 19:44:00.125373914 +0000 UTC m=+6.818038642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.032735 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1a38f39204 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.159748612 +0000 UTC m=+6.852413400,LastTimestamp:2025-08-13 19:44:00.159748612 +0000 UTC m=+6.852413400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.038190 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a3973e4ef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.168158447 +0000 UTC m=+6.860823165,LastTimestamp:2025-08-13 19:44:00.168158447 +0000 UTC m=+6.860823165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.054452 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a39869685 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.169383557 +0000 UTC m=+6.862048295,LastTimestamp:2025-08-13 19:44:00.169383557 +0000 UTC m=+6.862048295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.060585 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b1a3d2acd3c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.230477116 +0000 UTC m=+6.923141744,LastTimestamp:2025-08-13 19:44:00.230477116 +0000 UTC m=+6.923141744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.066507 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1a3dbdce11 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.240111121 +0000 UTC m=+6.932775859,LastTimestamp:2025-08-13 19:44:00.240111121 +0000 UTC m=+6.932775859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.072140 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1a4f719cb8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.53710764 +0000 UTC m=+7.229772348,LastTimestamp:2025-08-13 19:44:00.53710764 +0000 UTC m=+7.229772348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.078285 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a7478fb6e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.15834763 +0000 UTC m=+7.851012988,LastTimestamp:2025-08-13 19:44:01.15834763 +0000 UTC m=+7.851012988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.089502 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a749b2daa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.160588714 +0000 UTC m=+7.853253362,LastTimestamp:2025-08-13 19:44:01.160588714 +0000 UTC m=+7.853253362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.096173 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a898817aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.511659434 +0000 UTC m=+8.204324172,LastTimestamp:2025-08-13 19:44:01.511659434 +0000 UTC m=+8.204324172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.102249 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a8a37d37f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already 
present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.523176319 +0000 UTC m=+8.215840947,LastTimestamp:2025-08-13 19:44:01.523176319 +0000 UTC m=+8.215840947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.108244 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a8bfdc49b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.552925851 +0000 UTC m=+8.245590579,LastTimestamp:2025-08-13 19:44:01.552925851 +0000 UTC m=+8.245590579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.115351 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a8c18b55e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.554691422 +0000 UTC m=+8.247356050,LastTimestamp:2025-08-13 19:44:01.554691422 +0000 UTC m=+8.247356050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.121877 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1a8c2871a0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.555722656 +0000 UTC m=+8.248387694,LastTimestamp:2025-08-13 19:44:01.555722656 +0000 UTC 
m=+8.248387694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.129240 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1ae43f56b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.033618096 +0000 UTC m=+9.726282814,LastTimestamp:2025-08-13 19:44:03.033618096 +0000 UTC m=+9.726282814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.135255 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1ae71d62bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.081724603 +0000 UTC m=+9.774389431,LastTimestamp:2025-08-13 19:44:03.081724603 +0000 UTC m=+9.774389431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.142020 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1aeee82a72 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.212454514 +0000 UTC m=+9.905119352,LastTimestamp:2025-08-13 19:44:03.212454514 +0000 UTC m=+9.905119352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.147335 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1aefb94b8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.226160014 +0000 UTC m=+9.918824642,LastTimestamp:2025-08-13 19:44:03.226160014 +0000 UTC m=+9.918824642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.153455 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1af0961313 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.240629011 +0000 UTC m=+9.933296709,LastTimestamp:2025-08-13 19:44:03.240629011 +0000 UTC m=+9.933296709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.159561 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1af3f4aa7b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.297159803 +0000 UTC m=+9.989824671,LastTimestamp:2025-08-13 19:44:03.297159803 +0000 UTC m=+9.989824671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.165738 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b08a0a410 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.643974672 +0000 UTC m=+10.336639400,LastTimestamp:2025-08-13 19:44:03.643974672 +0000 UTC m=+10.336639400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.172647 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1b09844dfa openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.658894842 +0000 UTC m=+10.351559570,LastTimestamp:2025-08-13 19:44:03.658894842 +0000 UTC m=+10.351559570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.179930 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b4a743788 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.74835956 +0000 UTC m=+11.441025118,LastTimestamp:2025-08-13 19:44:04.74835956 +0000 UTC m=+11.441025118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.181609 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b4a769be8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.748516328 +0000 UTC 
m=+11.441181476,LastTimestamp:2025-08-13 19:44:04.748516328 +0000 UTC m=+11.441181476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.188466 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b4f78ce68 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.832546408 +0000 UTC m=+11.525211506,LastTimestamp:2025-08-13 19:44:04.832546408 +0000 UTC m=+11.525211506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.193493 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b4f9e7370 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.835013488 +0000 UTC m=+11.527678176,LastTimestamp:2025-08-13 19:44:04.835013488 +0000 UTC m=+11.527678176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.198940 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b5384199a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.900395418 +0000 UTC m=+11.593060046,LastTimestamp:2025-08-13 19:44:04.900395418 +0000 UTC m=+11.593060046,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.205056 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b53c35bb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.904541111 +0000 UTC m=+11.597206259,LastTimestamp:2025-08-13 19:44:04.904541111 +0000 UTC m=+11.597206259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.211243 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b891abecf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.799460559 +0000 UTC m=+12.492125337,LastTimestamp:2025-08-13 19:44:05.799460559 +0000 UTC m=+12.492125337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.216698 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b89221cd6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.799943382 +0000 UTC m=+12.492608170,LastTimestamp:2025-08-13 19:44:05.799943382 +0000 UTC m=+12.492608170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.222906 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b8d621d7a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.871246714 +0000 UTC m=+12.563911562,LastTimestamp:2025-08-13 19:44:05.871246714 +0000 UTC m=+12.563911562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.228245 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b9004b8dd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.915457757 +0000 UTC m=+12.608122415,LastTimestamp:2025-08-13 19:44:05.915457757 +0000 UTC m=+12.608122415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.233893 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b9025a162 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.917614434 +0000 UTC m=+12.610279142,LastTimestamp:2025-08-13 19:44:05.917614434 +0000 UTC m=+12.610279142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.239673 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1bdc2e4fe5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:07.193251813 +0000 UTC m=+13.885916601,LastTimestamp:2025-08-13 19:44:07.193251813 +0000 UTC m=+13.885916601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.244436 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1be6038a15 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:07.358220821 +0000 UTC m=+14.050885539,LastTimestamp:2025-08-13 19:44:07.358220821 +0000 UTC m=+14.050885539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.250241 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1be637912f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:07.361630511 +0000 UTC m=+14.054295269,LastTimestamp:2025-08-13 19:44:07.361630511 +0000 UTC m=+14.054295269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.256487 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1c0fd99e9b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:08.060116635 +0000 UTC m=+14.752781353,LastTimestamp:2025-08-13 19:44:08.060116635 +0000 UTC m=+14.752781353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.261845 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1c1834ac80 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:08.200301696 +0000 UTC m=+14.892966424,LastTimestamp:2025-08-13 19:44:08.200301696 +0000 UTC m=+14.892966424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.268266 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.270099 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.273193 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d63bae5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,LastTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.279406 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-apiserver-crc.185b6b1f1d51d0e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/healthz": context deadline exceeded Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:21.170999522 +0000 UTC m=+27.863664511,LastTimestamp:2025-08-13 19:44:21.170999522 +0000 UTC m=+27.863664511,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.285865 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1f1d52c4f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:21.171062004 +0000 UTC m=+27.863726712,LastTimestamp:2025-08-13 19:44:21.171062004 +0000 UTC m=+27.863726712,Count:1,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.291293 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6837ed20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.427594016 +0000 UTC m=+29.120259044,LastTimestamp:2025-08-13 19:44:22.427594016 +0000 UTC m=+29.120259044,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.296244 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6838c787 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.427649927 +0000 UTC m=+29.120314995,LastTimestamp:2025-08-13 19:44:22.427649927 +0000 UTC m=+29.120314995,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.300958 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6ea889af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Aug 13 19:49:31 crc kubenswrapper[4183]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/healthz\"","reason":"Forbidden","details":{},"code":403} Aug 13 19:49:31 crc kubenswrapper[4183]: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.535637423 +0000 UTC m=+29.228302151,LastTimestamp:2025-08-13 19:44:22.535637423 +0000 UTC m=+29.228302151,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.305822 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6eaa6926 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.535760166 +0000 UTC m=+29.228424934,LastTimestamp:2025-08-13 19:44:22.535760166 +0000 UTC m=+29.228424934,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.311049 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d6149ff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:22.581770237 +0000 UTC m=+29.274586219,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.315857 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d63bae5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d63bae5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,LastTimestamp:2025-08-13 19:44:22.582142917 +0000 UTC m=+29.274807915,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.321366 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.185b6b1b53c35bb7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b53c35bb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.904541111 +0000 UTC m=+11.597206259,LastTimestamp:2025-08-13 19:44:22.890986821 +0000 UTC m=+29.583651619,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.328168 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b21364a25ab openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:30.179861931 +0000 UTC m=+36.872527479,LastTimestamp:2025-08-13 19:44:30.179861931 +0000 UTC m=+36.872527479,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.333579 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b21364b662f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:30.179943983 +0000 UTC m=+36.872609101,LastTimestamp:2025-08-13 19:44:30.179943983 +0000 UTC m=+36.872609101,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.338449 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b2136ee1b84 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:30.190607236 +0000 UTC m=+36.883273024,LastTimestamp:2025-08-13 19:44:30.190607236 +0000 UTC m=+36.883273024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.343715 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b19a0082eef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19a0082eef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.594185455 +0000 UTC m=+4.286850313,LastTimestamp:2025-08-13 19:44:30.265237637 +0000 UTC m=+36.957902255,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.349819 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b19b35fe1a6\" 
is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19b35fe1a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.918699942 +0000 UTC m=+4.611364680,LastTimestamp:2025-08-13 19:44:30.560420379 +0000 UTC m=+37.253085177,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.354916 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b19ba50d163\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19ba50d163 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.035153251 +0000 UTC m=+4.727818009,LastTimestamp:2025-08-13 19:44:30.600329758 +0000 UTC m=+37.292994536,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.361362 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d6149ff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:42.58231867 +0000 UTC m=+49.274983458,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 
19:49:31.368279 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d63bae5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d63bae5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,LastTimestamp:2025-08-13 19:44:42.583111371 +0000 UTC m=+49.275776039,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.377404 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d6149ff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:52.581706322 +0000 UTC m=+59.274371120,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: I0813 19:49:31.512119 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.209040 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.210739 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.210975 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.211129 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:32 crc kubenswrapper[4183]: E0813 19:49:32.270105 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.510493 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:32 crc kubenswrapper[4183]: E0813 19:49:32.547606 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.581299 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.581414 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.835071 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.836842 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.836921 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.836946 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.836979 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:49:32 crc kubenswrapper[4183]: E0813 19:49:32.842913 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.208703 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.209917 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.209984 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13
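The startup-probe failures above are the kubelet polling the cluster-policy-controller health endpoint at https://192.168.126.11:10357/healthz and timing out. A minimal sketch of the same check, assuming network access to the node; the URL and timeout come from the log, and verify=False is needed because the endpoint serves a cluster-internal certificate:

    # Sketch: reproduce the kubelet's startup probe against the healthz
    # endpoint seen in the log (not part of the original file).
    import requests

    try:
        r = requests.get("https://192.168.126.11:10357/healthz", verify=False, timeout=5)
        print(r.status_code, r.text)
    except requests.exceptions.RequestException as e:
        # Matches the Client.Timeout / connection-reset errors logged above.
        print("probe failed:", e)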
19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.209999 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.211385 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:49:33 crc kubenswrapper[4183]: E0813 19:49:33.213154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:49:33 crc kubenswrapper[4183]: E0813 19:49:33.270083 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.513266 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:34 crc kubenswrapper[4183]: E0813 19:49:34.270501 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:34 crc kubenswrapper[4183]: E0813 19:49:34.290122 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:34 crc kubenswrapper[4183]: I0813 19:49:34.511654 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:35 crc kubenswrapper[4183]: E0813 19:49:35.269914 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:35 crc kubenswrapper[4183]: E0813 19:49:35.432366 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:49:35 crc kubenswrapper[4183]: I0813 19:49:35.509201 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:36 crc kubenswrapper[4183]: E0813 19:49:36.270729 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:36 crc kubenswrapper[4183]: I0813 19:49:36.510235 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:37 crc kubenswrapper[4183]: E0813 19:49:37.270369 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:37 crc kubenswrapper[4183]: I0813 19:49:37.511214 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:37 crc kubenswrapper[4183]: W0813 19:49:37.988112 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:37 crc kubenswrapper[4183]: E0813 19:49:37.988181 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:38 crc kubenswrapper[4183]: E0813 19:49:38.270227 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:38 crc kubenswrapper[4183]: I0813 19:49:38.516757 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:39 crc kubenswrapper[4183]: E0813 19:49:39.270570 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
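The transport.go message repeating every second above means the kubelet no longer has a valid client certificate and cannot reach the API server to renew it; once five minutes pass (shutdownThreshold="5m0s") the kubelet would restart itself. A quick way to confirm the certificate's validity window, as a sketch; /var/lib/kubelet/pki/kubelet-client-current.pem is the conventional rotation path and may differ on a given host:

    # Sketch: print the validity window of the kubelet client certificate.
    from cryptography import x509

    PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"  # assumed default path
    with open(PEM, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    print("subject:", cert.subject.rfc4514_string())
    print("not valid before:", cert.not_valid_before)
    print("not valid after: ", cert.not_valid_after)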
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.509832 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:39 crc kubenswrapper[4183]: E0813 19:49:39.555643 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.587743 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.588049 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.589302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.589501 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.589547 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.594720 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.843881 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845419 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845608 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845727 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845971 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:39 crc kubenswrapper[4183]: E0813 19:49:39.853543 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.219245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.220210 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.220264 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.220283 4183 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:40 crc kubenswrapper[4183]: E0813 19:49:40.270720 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.513496 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:41 crc kubenswrapper[4183]: E0813 19:49:41.270528 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:41 crc kubenswrapper[4183]: I0813 19:49:41.511039 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:41 crc kubenswrapper[4183]: W0813 19:49:41.624712 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Aug 13 19:49:41 crc kubenswrapper[4183]: E0813 19:49:41.624885 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Aug 13 19:49:42 crc kubenswrapper[4183]: E0813 19:49:42.270521 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:42 crc kubenswrapper[4183]: I0813 19:49:42.510642 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:43 crc kubenswrapper[4183]: E0813 19:49:43.270599 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:43 crc kubenswrapper[4183]: I0813 19:49:43.510273 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:44 crc kubenswrapper[4183]: E0813 19:49:44.270172 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:44 crc kubenswrapper[4183]: E0813 19:49:44.291062 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:44 crc kubenswrapper[4183]: I0813 19:49:44.510192 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:45 crc kubenswrapper[4183]: E0813 19:49:45.270530 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:45 crc kubenswrapper[4183]: E0813 19:49:45.432637 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:49:45 crc kubenswrapper[4183]: I0813 19:49:45.518078 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:46 crc kubenswrapper[4183]: E0813 19:49:46.270379 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
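Every 'forbidden: User "system:anonymous"' line above is the API server applying RBAC to a client that failed to authenticate (the expired certificate makes the kubelet effectively anonymous). A SelfSubjectAccessReview run with the failing credentials reproduces the same answer; a sketch using the official kubernetes Python client, with the verb and resource taken from the csinodes denials in the log:

    # Sketch: ask the API server whether the current credentials may get
    # "csinodes" at cluster scope, mirroring the denials above.
    from kubernetes import client, config

    config.load_kube_config()  # point this at the failing client's kubeconfig
    review = client.V1SelfSubjectAccessReview(
        spec=client.V1SelfSubjectAccessReviewSpec(
            resource_attributes=client.V1ResourceAttributes(
                verb="get", group="storage.k8s.io", resource="csinodes", name="crc"
            )
        )
    )
    resp = client.AuthorizationV1Api().create_self_subject_access_review(review)
    print("allowed:", resp.status.allowed)  # False for system:anonymous here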
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.509589 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:46 crc kubenswrapper[4183]: E0813 19:49:46.562571 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.854766 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856664 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856751 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856820 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856861 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:46 crc kubenswrapper[4183]: E0813 19:49:46.862298 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.208883 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.210220 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.210505 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.210528 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.211829 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:49:47 crc kubenswrapper[4183]: E0813 19:49:47.212249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:49:47 crc kubenswrapper[4183]: E0813 19:49:47.270192 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.509999 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:48 crc kubenswrapper[4183]: E0813 19:49:48.270137 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:48 crc kubenswrapper[4183]: I0813 19:49:48.510012 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:49 crc kubenswrapper[4183]: E0813 19:49:49.270426 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:49 crc kubenswrapper[4183]: I0813 19:49:49.515265 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:50 crc kubenswrapper[4183]: E0813 19:49:50.271060 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:50 crc kubenswrapper[4183]: I0813 19:49:50.511214 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:51 crc kubenswrapper[4183]: W0813 19:49:51.139011 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Aug 13 19:49:51 crc kubenswrapper[4183]: E0813 19:49:51.139082 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Aug 13 19:49:51 crc kubenswrapper[4183]: E0813 19:49:51.270920 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:51 crc kubenswrapper[4183]: I0813 19:49:51.512307 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:52 crc kubenswrapper[4183]: E0813 19:49:52.270037 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:52 crc kubenswrapper[4183]: I0813 19:49:52.510453 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:53 crc kubenswrapper[4183]: E0813 19:49:53.269932 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.510636 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:53 crc kubenswrapper[4183]: E0813 19:49:53.569575 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.862843 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864560 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864628 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864650 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864681 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:53 crc kubenswrapper[4183]: E0813 19:49:53.870484 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Aug 13 19:49:54 crc kubenswrapper[4183]: E0813 19:49:54.269682 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
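The controller.go retries above ("Failed to ensure lease exists, will retry ... interval=7s") are the kubelet failing to refresh its heartbeat Lease in kube-node-lease; until a renewal lands, the control plane sees the node as unhealthy. Reading the Lease directly shows the last successful renewal; a sketch, with the node and namespace names taken from the log:

    # Sketch: inspect the node's heartbeat Lease that the kubelet cannot refresh.
    from kubernetes import client, config

    config.load_kube_config()
    lease = client.CoordinationV1Api().read_namespaced_lease(
        name="crc", namespace="kube-node-lease"
    )
    print("holder:", lease.spec.holder_identity)
    print("last renew:", lease.spec.renew_time)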
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:54 crc kubenswrapper[4183]: E0813 19:49:54.291339 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.512971 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.663943 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664078 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664111 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664141 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664185 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.969279 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.989173 4183 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 19:49:55 crc kubenswrapper[4183]: E0813 19:49:55.270095 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:55 crc kubenswrapper[4183]: E0813 19:49:55.433830 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:49:55 crc kubenswrapper[4183]: I0813 19:49:55.510264 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:56 crc kubenswrapper[4183]: E0813 19:49:56.270142 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials."
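"Rotating certificates" above is the certificate manager submitting a fresh client CSR (csr-lhhqv a few lines below), which must be approved and issued before the kubelet can authenticate again. Listing CertificateSigningRequests and their approval state shows where that handshake stands; a sketch with the kubernetes Python client:

    # Sketch: list CSRs and their conditions, the objects behind the
    # csr.go "approved" / "issued" messages below.
    from kubernetes import client, config

    config.load_kube_config()
    for csr in client.CertificatesV1Api().list_certificate_signing_request().items:
        conds = [c.type for c in (csr.status.conditions or [])]
        print(csr.metadata.name, csr.spec.signer_name, conds or ["Pending"])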
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:56 crc kubenswrapper[4183]: I0813 19:49:56.506012 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:57 crc kubenswrapper[4183]: E0813 19:49:57.269926 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:57 crc kubenswrapper[4183]: I0813 19:49:57.541656 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:57 crc kubenswrapper[4183]: W0813 19:49:57.811287 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Aug 13 19:49:57 crc kubenswrapper[4183]: E0813 19:49:57.811355 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Aug 13 19:49:58 crc kubenswrapper[4183]: E0813 19:49:58.271088 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:58 crc kubenswrapper[4183]: I0813 19:49:58.513900 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:59 crc kubenswrapper[4183]: E0813 19:49:59.269943 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.519147 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.759430 4183 csr.go:261] certificate signing request csr-lhhqv is approved, waiting to be issued
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.783983 4183 csr.go:257] certificate signing request csr-lhhqv is issued
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.877575 4183 reconstruct_new.go:210] "DevicePaths of reconstructed volumes updated"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.270621 4183 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.785669 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-03-25 02:29:24.474296861 +0000 UTC
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.786022 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 5358h39m23.688281563s for next certificate rotation
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.870735 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875411 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875534 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.042192 4183 kubelet_node_status.go:116] "Node was previously registered" node="crc"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.042571 4183 kubelet_node_status.go:80] "Successfully registered node" node="crc"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047273 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047410 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047664 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:01Z","lastTransitionTime":"2025-08-13T19:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.081841 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 
13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089710 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089845 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089866 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089888 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089919 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:01Z","lastTransitionTime":"2025-08-13T19:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.111413 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 
13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.205512 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.205543 4183 kubelet_node_status.go:512] "Error getting the current node from lister" err="node \"crc\" not found"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.208655 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.210144 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.210216 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.210234 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.211710 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.212117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.305759 4183 kubelet_node_status.go:506] "Node not becoming ready in time after startup"
Aug 13 19:50:05 crc kubenswrapper[4183]: E0813 19:50:05.313867 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:05 crc kubenswrapper[4183]: E0813 19:50:05.434581 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:50:10 crc kubenswrapper[4183]: E0813 19:50:10.316000 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:10 crc kubenswrapper[4183]: I0813 19:50:10.885620 4183 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212015 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212090 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212107 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212125 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212160 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.223490 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 
13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228330 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228348 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228367 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228396 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.239346 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"…\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244231 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244548 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244689 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244966 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.245102 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.257632 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"…\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263600 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263666 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263688 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263712 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263741 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.275510 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"…\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281195 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281566 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281599 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281625 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.294314 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 
13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.294375 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.208952 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.210507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.210688 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.210736 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.212190 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.208746 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.211445 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.211521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.211539 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.333580 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.337214 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92"} Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.337372 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.338387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.338495 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.338517 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:15 crc kubenswrapper[4183]: E0813 19:50:15.318135 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:50:15 crc kubenswrapper[4183]: E0813 19:50:15.435056 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.564190 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.564518 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.566442 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.566636 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.566657 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:20 crc kubenswrapper[4183]: E0813 19:50:20.321676 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421167 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421256 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421273 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421303 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421367 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.613232 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 
13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621172 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621510 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621647 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621849 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621979 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.635751 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [multi-KB JSON status payload elided; verbatim duplicate of the 19:50:11 attempt above except for 19:50:21 condition timestamps] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug
13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641260 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641422 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641531 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641904 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.655538 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [multi-KB JSON status payload elided; verbatim duplicate of the 19:50:11 attempt above except for 19:50:21 condition timestamps] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug
13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661330 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661382 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661451 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661876 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661905 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.675383 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the previous attempt elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681015 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681072 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681105 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681127 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.695490 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the previous attempt elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.695561 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.992377 4183 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 19:50:25 crc kubenswrapper[4183]: E0813 19:50:25.324011 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:25 crc kubenswrapper[4183]: E0813 19:50:25.436171 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.570672 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.571207 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.573151 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.573306 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.573342 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:29 crc kubenswrapper[4183]: I0813 19:50:29.245026 4183 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 19:50:30 crc kubenswrapper[4183]: E0813 19:50:30.326466 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814172 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814231 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814253 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814288 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.829151 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the previous attempt elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835325 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835394 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835413 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835434 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.847619 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the previous attempt elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.853860 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.854067 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.854174 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.854270 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.854367 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.868884 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 
13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877119 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877197 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877216 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877280 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.891400 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 
13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896583 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896662 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896724 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.909018 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 
Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.909106 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:50:35 crc kubenswrapper[4183]: E0813 19:50:35.328375 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:35 crc kubenswrapper[4183]: E0813 19:50:35.437419 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.517928 4183 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.756603 4183 apiserver.go:52] "Watching apiserver"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.776022 4183 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778291 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7","openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw","openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7","openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-machine-config-operator/machine-config-daemon-zpnhg","openshift-marketplace/certified-operators-7287f","openshift-network-node-identity/network-node-identity-7xghp","openshift-network-operator/network-operator-767c585db5-zd56b","openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh","openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b","openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb","openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz","openshift-etcd-operator/etcd-operator-768d5b5d86-722mg","openshift-ingress/router-default-5c9bf7bc58-6jctv","openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh","openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm","openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m","openshift-authentication/oauth-openshift-765b47f944-n2lhl","openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z","openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-apiserver/apiserver-67cbf64bc9-mtx25","openshift-machine-config-operator/machine-config-server-v65wr","openshift-marketplace/redhat-operators-f4jkp","openshift-dns/dns-default-gbw49","openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd","openshift-dns-operator/dns-operator-75f687757b-nz2xb","openshift-image-registry/image-registry-585546dd8b-v5m4t","openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv","openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc","openshift-multus/multus-admission-controller-6c7c885997-4hbbc","openshift-multus/network-metrics-daemon-qdfr4","openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf","openshift-ovn-kubernetes/ovnkube-node-44qcg","openshift-kube-controller-manager/revision-pruner-8-crc","openshift-image-registry/node-ca-l92hr","openshift-network-operator/iptables-alerter-wwpnd","openshift-service-ca/service-ca-666f99b6f-vlbxv","openshift-console/console-84fccc7b6-mkncc","openshift-controller-manager/controller-manager-6ff78978b4-q4vv8","openshift-marketplace/community-operators-8jhz6","hostpath-provisioner/csi-hostpathplugin-hvm8g","openshift-console/downloads-65476884b9-9wcvx","openshift-marketplace/marketplace-operator-8b455464d-f9xdt","openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5","openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz","openshift-console-operator/console-conversion-webhook-595f9969b-l6z49","openshift-dns/node-resolver-dn27q","openshift-ingress-canary/ingress-canary-2vhcn","openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7","openshift-multus/multus-additional-cni-plugins-bzj2p","openshift-multus/multus-q88th","openshift-network-diagnostics/network-check-target-v54bt","openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9","openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg","openshift-etcd/etcd-crc","openshift-marketplace/redhat-marketplace-8s8pc","openshift-marketplace/redhat-marketplace-rmwfn","openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr","openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2","openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"]
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778476 4183 topology_manager.go:215] "Topology Admit Handler" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" podNamespace="openshift-etcd-operator" podName="etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778870 4183 topology_manager.go:215] "Topology Admit Handler" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" podNamespace="openshift-marketplace" podName="marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778954 4183 topology_manager.go:215] "Topology Admit Handler" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" podNamespace="openshift-machine-config-operator" podName="machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779016 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" podNamespace="openshift-service-ca-operator" podName="service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779108 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" podNamespace="openshift-operator-lifecycle-manager" podName="catalog-operator-857456c46-7f5wf"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779177 4183 topology_manager.go:215] "Topology Admit Handler" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" podNamespace="openshift-operator-lifecycle-manager" podName="package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779239 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" podNamespace="openshift-kube-apiserver-operator" podName="kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.779620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779463 4183 topology_manager.go:215] "Topology Admit Handler" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" podNamespace="openshift-machine-api" podName="machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.779660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.779873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780375 4183 topology_manager.go:215] "Topology Admit Handler" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" podNamespace="openshift-network-operator" podName="network-operator-767c585db5-zd56b"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780584 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" podNamespace="openshift-operator-lifecycle-manager" podName="olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.781189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.781417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.781191 4183 topology_manager.go:215] "Topology Admit Handler" podUID="71af81a9-7d43-49b2-9287-c375900aa905" podNamespace="openshift-kube-scheduler-operator" podName="openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.781258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.781953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.782210 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" podNamespace="openshift-kube-controller-manager-operator" podName="kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.782325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.782757 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" podNamespace="openshift-kube-storage-version-migrator-operator" podName="kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.783099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.783203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.783420 4183 topology_manager.go:215] "Topology Admit Handler" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" podNamespace="openshift-machine-api" podName="control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.783663 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" podNamespace="openshift-authentication-operator" podName="authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.784040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.784116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.784228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.784384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.784462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.784737 4183 topology_manager.go:215] "Topology Admit Handler" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" podNamespace="openshift-config-operator" podName="openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.785160 4183 topology_manager.go:215] "Topology Admit Handler" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" podNamespace="openshift-apiserver-operator" podName="openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.785318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.785639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.785336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.785713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786231 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786269 4183 topology_manager.go:215] "Topology Admit Handler" podUID="10603adc-d495-423c-9459-4caa405960bb" podNamespace="openshift-dns-operator" podName="dns-operator-75f687757b-nz2xb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.787040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786749 4183 topology_manager.go:215] "Topology Admit Handler" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" podNamespace="openshift-controller-manager-operator" podName="openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787193 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" podNamespace="openshift-image-registry" podName="cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.787327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.787564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787367 4183 topology_manager.go:215] "Topology Admit Handler" podUID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" podNamespace="openshift-multus" podName="multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787875 4183 topology_manager.go:215] "Topology Admit Handler" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" podNamespace="openshift-multus" podName="multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.788101 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" podNamespace="openshift-multus" podName="network-metrics-daemon-qdfr4"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.788490 4183 topology_manager.go:215] "Topology Admit Handler" podUID="410cf605-1970-4691-9c95-53fdc123b1f3" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-control-plane-77c846df58-6l97b"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.788736 4183 topology_manager.go:215] "Topology Admit Handler" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" podNamespace="openshift-network-diagnostics" podName="network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.790616 4183 topology_manager.go:215] "Topology Admit Handler" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" podNamespace="openshift-network-diagnostics" podName="network-check-target-v54bt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.791383 4183 topology_manager.go:215] "Topology Admit Handler" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" podNamespace="openshift-network-node-identity" podName="network-node-identity-7xghp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.792040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.792215 4183 topology_manager.go:215] "Topology Admit Handler" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.792420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.798866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.793065 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.799077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787459 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.793268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.799527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794364 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2b6d14a5-ca00-40c7-af7a-051a98a24eed" podNamespace="openshift-network-operator" podName="iptables-alerter-wwpnd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.800281 4183 topology_manager.go:215] "Topology Admit Handler" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" podNamespace="openshift-kube-storage-version-migrator" podName="migrator-f7c6d88df-q2fnv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794555 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.795116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.811906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.812489 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.812676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813056 4183 topology_manager.go:215] "Topology Admit Handler" podUID="378552fd-5e53-4882-87ff-95f3d9198861" podNamespace="openshift-service-ca" podName="service-ca-666f99b6f-vlbxv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813646 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.814482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.814668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813932 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.816490 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.816766 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.820457 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.820702 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.821071 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.821437 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.821974 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.822161 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.822350 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.814377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.823768 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6a23c0ee-5648-448c-b772-83dced2891ce" podNamespace="openshift-dns" podName="node-resolver-dn27q" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.823996 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824160 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824236 4183 topology_manager.go:215] "Topology Admit Handler" podUID="13045510-8717-4a71-ade4-be95a76440a7" podNamespace="openshift-dns" podName="dns-default-gbw49" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824337 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824564 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824876 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824900 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825235 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9fb762d1-812f-43f1-9eac-68034c1ecec7" podNamespace="openshift-cluster-version" podName="cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.825452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825508 4183 topology_manager.go:215] "Topology Admit Handler" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" podNamespace="openshift-oauth-apiserver" podName="apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825610 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.826256 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" podNamespace="openshift-operator-lifecycle-manager" podName="packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.826588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.826923 4183 topology_manager.go:215] "Topology Admit Handler" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" podNamespace="openshift-ingress-operator" podName="ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.827276 4183 topology_manager.go:215] "Topology Admit Handler" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" podNamespace="openshift-cluster-samples-operator" podName="cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.827581 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" podNamespace="openshift-cluster-machine-approver" podName="machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.827734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.827954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.828070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828349 4183 topology_manager.go:215] "Topology Admit Handler" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" podNamespace="openshift-ingress" podName="router-default-5c9bf7bc58-6jctv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828484 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.828586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829150 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" podNamespace="openshift-machine-config-operator" podName="machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829931 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.830195 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.830220 4183 topology_manager.go:215] "Topology Admit Handler" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" podNamespace="openshift-console-operator" podName="console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829370 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.830751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831073 4183 topology_manager.go:215] "Topology Admit Handler" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" podNamespace="openshift-console-operator" podName="console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831430 4183 topology_manager.go:215] "Topology Admit Handler" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" podNamespace="openshift-machine-config-operator" podName="machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.831702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831130 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.831956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832299 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6268b7fe-8910-4505-b404-6f1df638105c" podNamespace="openshift-console" podName="downloads-65476884b9-9wcvx" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832628 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bf1a8b70-3856-486f-9912-a2de1d57c3fb" podNamespace="openshift-machine-config-operator" podName="machine-config-server-v65wr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.832975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.833167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.833551 4183 topology_manager.go:215] "Topology Admit Handler" podUID="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" podNamespace="openshift-image-registry" podName="node-ca-l92hr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834086 4183 topology_manager.go:215] "Topology Admit Handler" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" podNamespace="openshift-ingress-canary" podName="ingress-canary-2vhcn" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834596 4183 topology_manager.go:215] "Topology Admit Handler" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" podNamespace="openshift-multus" podName="multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834759 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.835213 4183 topology_manager.go:215] "Topology Admit Handler" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" podNamespace="hostpath-provisioner" podName="csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.835384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.835477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.835878 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" podNamespace="openshift-image-registry" podName="image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.836253 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" podNamespace="openshift-console" podName="console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.836600 4183 topology_manager.go:215] "Topology Admit Handler" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.837070 4183 topology_manager.go:215] "Topology Admit Handler" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" podNamespace="openshift-apiserver" podName="apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.837483 4183 topology_manager.go:215] "Topology Admit Handler" podUID="13ad7555-5f28-4555-a563-892713a8433a" podNamespace="openshift-authentication" podName="oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.837759 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838100 4183 topology_manager.go:215] "Topology Admit Handler" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" podNamespace="openshift-controller-manager" podName="controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838255 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838630 4183 topology_manager.go:215] "Topology Admit Handler" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" podNamespace="openshift-marketplace" podName="certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838756 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839190 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839427 4183 topology_manager.go:215] "Topology Admit Handler" 
podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" podNamespace="openshift-marketplace" podName="community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839941 4183 topology_manager.go:215] "Topology Admit Handler" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" podNamespace="openshift-marketplace" podName="redhat-operators-f4jkp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840167 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840286 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838200 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840548 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840698 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841006 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841374 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841606 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.841929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842007 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" podNamespace="openshift-marketplace" podName="redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.842322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.842442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842497 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.842525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842616 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842723 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.843163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.843357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.843931 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.844006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842107 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.840689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.844632 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.844737 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.844653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845071 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845436 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" podNamespace="openshift-marketplace" podName="redhat-marketplace-rmwfn" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845496 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.845887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.845968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845355 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.846052 4183 topology_manager.go:215] "Topology Admit Handler" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-8-crc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.846372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.846376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.863009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.880047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.897584 4183 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898734 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898908 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898940 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898972 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898996 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899018 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899045 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899068 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899090 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899118 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899139 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-c2f8t\" (UniqueName: \"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899164 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899188 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899219 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") 
" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899248 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899303 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899328 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899380 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899401 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899428 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jw8\" (UniqueName: \"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899509 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899572 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899604 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899632 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899682 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899711 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: 
\"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899762 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899864 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899897 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899924 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899976 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod 
\"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900032 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900057 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900080 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900109 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900131 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4f9\" (UniqueName: \"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 
19:50:39.900291 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900319 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900518 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900540 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900596 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900650 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900747 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901044 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901077 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901399 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 
13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.901756 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902158 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902212 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902295 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902424 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.402238295 +0000 UTC m=+407.094902923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.902466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902508 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.402486162 +0000 UTC m=+407.095150820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.902630 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.902742 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903086 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903454 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903594 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903871 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903991 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904087 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904148 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904321 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904399 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.905258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.905873 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906056 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906263 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906408 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906756 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.909890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.909949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: 
\"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910096 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910190 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910233 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910374 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910411 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910462 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:39 crc 
kubenswrapper[4183]: I0813 19:50:39.910486 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910511 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910537 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910570 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910602 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910625 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910648 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910730 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910765 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910854 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910881 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910925 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911233 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911294 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911316 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 
13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911337 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911358 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914042 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914068 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914106 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr9t\" (UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914135 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914158 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914207 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod 
\"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914233 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914275 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914300 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914331 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914354 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914415 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914453 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: 
\"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914509 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914593 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914616 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914642 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914756 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914902 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914948 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915003 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915030 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915083 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915114 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:39 crc 
kubenswrapper[4183]: I0813 19:50:39.915162 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915186 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915235 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915261 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915336 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915365 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.921592 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 
19:50:39.902543 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.922372 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906261 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.927437 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906577 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.927612 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.927649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.42762797 +0000 UTC m=+407.120292559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.927681 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.427661471 +0000 UTC m=+407.120326149 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.927698 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.427690392 +0000 UTC m=+407.120354980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.927716 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.427709523 +0000 UTC m=+407.120374121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928118 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428093264 +0000 UTC m=+407.120757962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928223 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428211547 +0000 UTC m=+407.120876145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.42830978 +0000 UTC m=+407.120974448 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928411 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428400983 +0000 UTC m=+407.121065581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928505 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428493755 +0000 UTC m=+407.121158343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928585 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428575108 +0000 UTC m=+407.121239696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928710 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428699471 +0000 UTC m=+407.121364059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.428905857 +0000 UTC m=+407.121570575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929153 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.429137544 +0000 UTC m=+407.121802252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929261 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.429249537 +0000 UTC m=+407.121914125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.42935045 +0000 UTC m=+407.122015058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929458 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.429448262 +0000 UTC m=+407.122112851 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.930252 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.930440 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.930734 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.933915 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.934582 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935125 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.935163 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.435201617 +0000 UTC m=+407.127866355 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935281 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935349 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.435329601 +0000 UTC m=+407.127994339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935391 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935440 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.435432574 +0000 UTC m=+407.128097302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935723 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935938 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.435926588 +0000 UTC m=+407.128591236 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936011 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936088 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.436057161 +0000 UTC m=+407.128721889 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936130 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936164 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436154044 +0000 UTC m=+407.128818772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936341 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936405 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436395861 +0000 UTC m=+407.129060499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935288 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936593 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436583686 +0000 UTC m=+407.129248304 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936642 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936682 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436673389 +0000 UTC m=+407.129338017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.939937 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.940023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.440010504 +0000 UTC m=+407.132675122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.940080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.440068636 +0000 UTC m=+407.132733254 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941239 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941449 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941547 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941642 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941769 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942016 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942135 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942229 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942318 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942424 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942515 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942606 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942693 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.944945 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945063 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945152 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945302 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945403 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945402 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945480 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945534 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945566 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945592 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945650 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945675 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946078 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946226 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.446204751 +0000 UTC m=+407.138869529 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946387 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.44649172 +0000 UTC m=+407.139156338 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946719 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946963 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947095 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947176 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947282 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947340 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.447387085 +0000 UTC m=+407.140051813 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947411 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947611 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947923 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948181 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948302 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948408 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948506 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948604 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.948996 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949269 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949314 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949377 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949387 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949400 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.949887 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949954 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object 
"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950021 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950070 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950128 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.950326 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950369 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950902 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950965 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.951022 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.951026 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.951061 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951346 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951364 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object 
"openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951382 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951577 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.952003 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.952069 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954085 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954197 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954239 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954630 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.955715 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.956034 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.956247 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.956378 4183 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.956593 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.456561177 +0000 UTC m=+407.149225965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.956740 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.956761 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.957096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.457077892 +0000 UTC m=+407.149742580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.957331 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.945240 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.957879 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.958726 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.958958 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.964044 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4sfhc\" (UniqueName: \"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.964223 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.964344 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981001 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.480932494 +0000 UTC m=+407.173597112 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981078 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481054117 +0000 UTC m=+407.173718835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981110 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481090728 +0000 UTC m=+407.173755326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981135 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481119049 +0000 UTC m=+407.173783647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981157 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.48114988 +0000 UTC m=+407.173814468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981179 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481170981 +0000 UTC m=+407.173835579 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981201 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481193371 +0000 UTC m=+407.173858079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481211322 +0000 UTC m=+407.173876030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981240 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481230432 +0000 UTC m=+407.173895030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981264 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481256113 +0000 UTC m=+407.173920821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981281 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.481274004 +0000 UTC m=+407.173938772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981302 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481293824 +0000 UTC m=+407.173958422 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481314785 +0000 UTC m=+407.173979383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981342 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481333585 +0000 UTC m=+407.173998283 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981369 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481353386 +0000 UTC m=+407.174018164 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.981390 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481387827 +0000 UTC m=+407.174052615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.48148407 +0000 UTC m=+407.174148758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481523171 +0000 UTC m=+407.174187759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981540 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981553 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.481543491 +0000 UTC m=+407.174208179 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981582 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481574512 +0000 UTC m=+407.174239400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481596023 +0000 UTC m=+407.174260751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.981659 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.981704 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946976 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981881 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981933 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.481912432 +0000 UTC m=+407.174577170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981932 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947533 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981981 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.982001 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.982028 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.482009175 +0000 UTC m=+407.174673943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.982062 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.482044396 +0000 UTC m=+407.174709194 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982119 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982174 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982225 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982375 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982443 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982505 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982549 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982850 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982907 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982942 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983007 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983035 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" 
(UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983093 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983135 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983195 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983241 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983312 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983343 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983370 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983401 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983464 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947642 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.986704 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.986915 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.944961 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.988040 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2f8t\" (UniqueName: \"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.988586 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object 
"openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.988727 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.988902 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.989071 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.989156 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.989230 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.989388 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.995425 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.995917 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.995939 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.995958 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946154 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997174 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997195 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997217 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997338 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997418 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997432 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod 
openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997579 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997622 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997640 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997923 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998053 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998099 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: 
\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998155 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998191 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998234 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998279 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998341 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998365 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998397 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998428 4183 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998461 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998488 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998524 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998598 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998631 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998689 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003069 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003535 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003614 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003114 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003184 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004121 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004323 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004671 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.007460 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.507495343 +0000 UTC m=+407.200160031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012179 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512153866 +0000 UTC m=+407.204818564 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512192467 +0000 UTC m=+407.204857165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512211868 +0000 UTC m=+407.204876586 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512235249 +0000 UTC m=+407.204900067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012277 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512261349 +0000 UTC m=+407.204926047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012295 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.5122873 +0000 UTC m=+407.204951998 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012374 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012938 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012985 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013172 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013453 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013574 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod 
\"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013737 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015081 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015210 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015408 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015593 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.016143 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.016602 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.016871 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" 
(UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.018521 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4f9\" (UniqueName: \"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.019119 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.020741 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.021678 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.021971 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.022410 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.023291 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.023624 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.516762998 +0000 UTC m=+407.209427816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.023677 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.523662685 +0000 UTC m=+407.216327283 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.026600 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.027004 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.029345 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.029578 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.030344 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.031228 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.031653 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007909 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.038765 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007958 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007988 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object 
"openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.008075 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.008656 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.008885 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.009447 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.009483 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007588 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.032241 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.032259 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.032332 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.032367 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.032386 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.026884 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.523689636 +0000 UTC m=+407.216354334 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039489 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539465937 +0000 UTC m=+407.232130645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539502438 +0000 UTC m=+407.232167036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039528 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539521218 +0000 UTC m=+407.232185816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039549 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539537429 +0000 UTC m=+407.232202017 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039565 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.53955768 +0000 UTC m=+407.232222278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039587 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.5395749 +0000 UTC m=+407.232239598 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039612 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539599681 +0000 UTC m=+407.232264279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039631 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539619691 +0000 UTC m=+407.232284279 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039646 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539638172 +0000 UTC m=+407.232302830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039663 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539657612 +0000 UTC m=+407.232322200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039679 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539671033 +0000 UTC m=+407.232335631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539686603 +0000 UTC m=+407.232351191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039709 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.539703274 +0000 UTC m=+407.232367872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539721844 +0000 UTC m=+407.232386432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039749 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539743465 +0000 UTC m=+407.232408073 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039767 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539761385 +0000 UTC m=+407.232426093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039995 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539974411 +0000 UTC m=+407.232639019 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540013773 +0000 UTC m=+407.232678481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040042 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540036533 +0000 UTC m=+407.232701121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040056 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540050534 +0000 UTC m=+407.232715122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.033427 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040101 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540094455 +0000 UTC m=+407.232759073 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.033626 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.033959 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.034014 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040177 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540170457 +0000 UTC m=+407.232835175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.034115 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540210798 +0000 UTC m=+407.232875486 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.035141 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.035194 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040290 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.54027947 +0000 UTC m=+407.232944088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.035334 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040334 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540328002 +0000 UTC m=+407.232992620 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040459 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040473 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040510 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540499606 +0000 UTC m=+407.233164214 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041073 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041122 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041137 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041219 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.541186746 +0000 UTC m=+407.233851444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041359 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041440 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041449 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.541499335 +0000 UTC m=+407.234164093 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041594 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041611 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041626 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041689 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.54168032 +0000 UTC m=+407.234345138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.041931 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.042124 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.042173 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.542155684 +0000 UTC m=+407.234820392 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007619 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.059388 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.559359415 +0000 UTC m=+407.252024053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.013478 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.060348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.560333093 +0000 UTC m=+407.252997891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.013312 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.058596 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.561312631 +0000 UTC m=+407.253977259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061504 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061342 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.059226 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.059265 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.059304 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061420 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061715 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061727 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061740 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.058708 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.062594 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.058765 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.064511 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: 
object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066485 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066499 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065179 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065330 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065490 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065585 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066226 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.561543078 +0000 UTC m=+407.254207706 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066549 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.56653355 +0000 UTC m=+407.259198258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566558681 +0000 UTC m=+407.259223269 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066583 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566576462 +0000 UTC m=+407.259241060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066599 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566593442 +0000 UTC m=+407.259258180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066615 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566609253 +0000 UTC m=+407.259273951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066631 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566623623 +0000 UTC m=+407.259288241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.566642104 +0000 UTC m=+407.259306722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066664 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566658264 +0000 UTC m=+407.259322862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066679 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566674144 +0000 UTC m=+407.259338743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566687595 +0000 UTC m=+407.259352193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066755 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566750137 +0000 UTC m=+407.259414725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.569214847 +0000 UTC m=+407.261879465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069310 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569295549 +0000 UTC m=+407.261960157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069349 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569342581 +0000 UTC m=+407.262007189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069392 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569385832 +0000 UTC m=+407.262050440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569421753 +0000 UTC m=+407.262086371 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070518 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070560 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070592 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070625 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.570616207 +0000 UTC m=+407.263280825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070711 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070725 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070733 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.570751461 +0000 UTC m=+407.263416079 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.070879 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.071762 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.072578 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.073055 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.082900 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083099 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.583389592 +0000 UTC m=+407.276054330 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.073269 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.073889 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qr9t\" (UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.074001 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.075109 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086579 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083145 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083009 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.085385 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jw8\" (UniqueName: \"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085453 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085571 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085625 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object 
"openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085662 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.085730 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085874 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086334 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086382 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089156 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089372 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089723 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.589694212 +0000 UTC m=+407.282359040 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089740 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090358 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089758 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089771 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090892 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089855 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090934 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089923 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090975 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089957 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091011 4183 projected.go:200] Error 
preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089966 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091109 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089981 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091145 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089996 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091210 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086652 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091255 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59065947 +0000 UTC m=+407.283324208 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591298398 +0000 UTC m=+407.283962996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591327129 +0000 UTC m=+407.283991717 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091352 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59134372 +0000 UTC m=+407.284008428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59135985 +0000 UTC m=+407.284024438 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091380 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59137304 +0000 UTC m=+407.284037638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091396 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591390511 +0000 UTC m=+407.284055219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091411 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591405241 +0000 UTC m=+407.284069839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091426 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591419782 +0000 UTC m=+407.284084370 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591433582 +0000 UTC m=+407.284098170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.093285 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.093382 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.093667 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.094314 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.094496 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.095386 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.096028 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.097052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sfhc\" (UniqueName: \"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.097620 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.097913 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.098147 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.098447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.598416612 +0000 UTC m=+407.291081300 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100236 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100358 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100446 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.600549023 +0000 UTC m=+407.293213641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.104282 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.105100 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.105922 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.106114 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109218 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109303 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109319 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109399 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.609377205 +0000 UTC m=+407.302041823 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117099 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117234 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117310 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117394 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117439 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117472 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117534 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117585 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117686 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118005 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118381 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118646 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118766 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118885 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119084 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119507 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119767 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119907 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119965 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120064 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120117 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120187 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120230 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120374 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120395 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120417 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121024 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121115 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121328 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121373 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121851 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122055 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122146 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122187 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120701 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120932 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120936 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120956 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120969 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122565 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122724 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122551 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.125983 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.126546 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.126735 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127133 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127341 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130481 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127269 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130843 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130563 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127698 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.128566 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130422 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131232 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131442 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131483 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131675 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132135 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132338 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132142 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132170 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132163 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132204 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132744 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132909 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134085 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134483 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134637 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132938 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.135971 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136029 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136102 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136117 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.136618 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.136632 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.636685536 +0000 UTC m=+407.329350374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.139030 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.139129 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.140384 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.140426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.142102 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.142351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.142871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143111 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143001 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143590 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143754 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.144419 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.144900 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.144629 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.145027 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.145886 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.160910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.170534 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.175171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.182246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.187836 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.196391 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197400 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197445 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197465 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197537 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.697515744 +0000 UTC m=+407.390180472 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.203657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.221286 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.237889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.253051 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.254531 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:40 crc kubenswrapper[4183]: W0813 19:50:40.268483 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod410cf605_1970_4691_9c95_53fdc123b1f3.slice/crio-5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e WatchSource:0}: Error finding container 5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e: Status 404 returned error can't find the container with id 5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.279875 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.280084 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.280199 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.280339 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.780315891 +0000 UTC m=+407.472980619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.296267 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.296334 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.296430 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.79640468 +0000 UTC m=+407.489069408 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.298240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.339918 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.341980 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.367083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.375130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.396203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.419432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.428216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.428929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.454329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.454950 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455129 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455563 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455613 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455667 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.455638061 +0000 UTC m=+408.148302839 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455645 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455697 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455706 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.455685313 +0000 UTC m=+408.148350061 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455763 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.458383 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.458365089 +0000 UTC m=+408.151029697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459226 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459358 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459406 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459437 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459505 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459578 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459615 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459648 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459683 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459716 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459961 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460015 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460050 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID:
\"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460107 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460171 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460251 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460282 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460420198 +0000 UTC m=+408.153084796 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460464 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460456719 +0000 UTC m=+408.153121307 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460522 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460553902 +0000 UTC m=+408.153218520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460611 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460632514 +0000 UTC m=+408.153297132 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460692 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460696 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.461516 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.461749 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.461956 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462026 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462389 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462410 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462422 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462710 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463178 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463411 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463496 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc 
kubenswrapper[4183]: E0813 19:50:40.463562 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463636 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463727 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463927 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463948 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.464018 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.464090 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.464170 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460764 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460727 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460713906 +0000 UTC m=+408.153378524 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.467622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.467600923 +0000 UTC m=+408.160265521 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.467661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468075 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468108 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468376 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469257 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469301 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469291952 +0000 UTC m=+408.161956570 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469323893 +0000 UTC m=+408.161988491 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469516 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469504128 +0000 UTC m=+408.162168726 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469528268 +0000 UTC m=+408.162193716 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469557289 +0000 UTC m=+408.162221887 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469586 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.46957859 +0000 UTC m=+408.162243188 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469757 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469738414 +0000 UTC m=+408.162403012 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469837 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469768885 +0000 UTC m=+408.162433483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469890 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469852168 +0000 UTC m=+408.162516846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.470911 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.47100016 +0000 UTC m=+408.163664788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471278 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.471356171 +0000 UTC m=+408.164020769 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471769 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.471755912 +0000 UTC m=+408.164420610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471530 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471565 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.473353 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.473339137 +0000 UTC m=+408.166003755 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476701 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476679103 +0000 UTC m=+408.169343841 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476714784 +0000 UTC m=+408.169379372 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476740 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476733884 +0000 UTC m=+408.169398482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476756 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476750155 +0000 UTC m=+408.169414753 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476873 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476764075 +0000 UTC m=+408.169428663 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476920 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476908999 +0000 UTC m=+408.169573587 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476936 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.47692974 +0000 UTC m=+408.169594448 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476958 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476952801 +0000 UTC m=+408.169617389 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476981 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476969741 +0000 UTC m=+408.169634329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477047 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477090 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.477080434 +0000 UTC m=+408.169745292 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.477122 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.477180 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.477220 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477358 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.477382853 +0000 UTC m=+408.170047471 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477447 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.477465965 +0000 UTC m=+408.170130573 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.489692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.523155 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"13eba7880abbfbef1344a579dab2a0b19cce315561153e251e3263ed0687b3e7"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.523402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.548115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.593376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9572cbf27a025e52f8350ba1f90df2f73ac013d88644e34f555a7ae71822234\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:23:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:07Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.597211 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"221a24b0d917be98aa8fdfcfe9dbbefc5cd678c5dd905ae1ce5de6a160842882"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610491 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610644 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610731 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610768 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611178 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611207 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611237 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611320121 +0000 UTC m=+408.303984859 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611377 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611400 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611417 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611428 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611451 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611465 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611482 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611463675 +0000 UTC m=+408.304128403 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611249 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:41.611501356 +0000 UTC m=+408.304165954 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611560 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611592309 +0000 UTC m=+408.304257037 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611629 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611646 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611766 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611668531 +0000 UTC m=+408.304333209 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612124 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612225 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612287 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612310 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612332 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61232174 +0000 UTC m=+408.304986358 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61233993 +0000 UTC m=+408.305004558 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611862 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612369 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612383 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612414 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612406382 +0000 UTC m=+408.305070990 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612386 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612449 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612442863 +0000 UTC m=+408.305107581 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611906 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612473 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612485 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612523 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612515185 +0000 UTC m=+408.305179913 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611929 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612555 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612547486 +0000 UTC m=+408.305212094 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611939 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612577 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611968 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611977 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612627 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612635 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612582 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612576787 +0000 UTC m=+408.305241395 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612668 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612661629 +0000 UTC m=+408.305326227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612682 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61267664 +0000 UTC m=+408.305341228 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612696 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61269076 +0000 UTC m=+408.305355348 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611995 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612721631 +0000 UTC m=+408.305386489 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611895 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612763 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612757312 +0000 UTC m=+408.305421930 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624197 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624285 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624301 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624598 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.6245759 +0000 UTC m=+408.317240508 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.624674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624873 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624916 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.624902479 +0000 UTC m=+408.317567087 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.624954 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625186 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625201 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625461 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.625442705 +0000 UTC m=+408.318107313 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.625313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625686 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625842 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.625771574 +0000 UTC m=+408.318492644 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.625870 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.625940 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626040 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.626071212 +0000 UTC m=+408.318735830 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626327 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626382 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.626369941 +0000 UTC m=+408.319034559 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.626550 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626884 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.627156 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.627138403 +0000 UTC m=+408.319803111 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.627190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.627273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.627418 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.627715 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.627700459 +0000 UTC m=+408.320365287 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.634892 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.635391 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.635379579 +0000 UTC m=+408.328044317 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635525 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635601 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635650 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636026 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636077 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636256 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636431 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636477 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637104 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637405 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637440 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.637549 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637587 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.637656 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.637626603 +0000 UTC m=+408.330291271 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.637762 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638749 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638767 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638871 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638931 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.638898609 +0000 UTC m=+408.331563447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642307307 +0000 UTC m=+408.334972075 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638000 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642396 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642385259 +0000 UTC m=+408.335049937 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638343 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642446 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642467 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642518783 +0000 UTC m=+408.335183461 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638426 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642579444 +0000 UTC m=+408.335244132 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637768 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642726 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642885 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642944 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod 
\"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643201 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643369 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643410 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:40 
crc kubenswrapper[4183]: I0813 19:50:40.643458 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644115 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644164 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644192 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644219 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644254 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644337 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644387 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644430 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644465 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644542 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644620 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644682 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: 
\"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652375 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652421 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652698 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653177 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 
19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653222 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653462 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653506 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653545 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653694 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653857 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.664754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667430 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667539 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667560 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667659 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.66762997 +0000 UTC m=+408.360294598 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638983 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667740 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.667720143 +0000 UTC m=+408.360384861 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669223 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669389 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669424 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669507 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.669483293 +0000 UTC m=+408.362147991 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669624 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.669657688 +0000 UTC m=+408.362322306 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669957 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669979 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670224 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670275 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.670259045 +0000 UTC m=+408.362923784 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670360 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670603 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.670591155 +0000 UTC m=+408.363256073 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671119 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671139 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671306 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671187692 +0000 UTC m=+408.363852400 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671532 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671596 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671576703 +0000 UTC m=+408.364241381 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671604 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671664 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671651935 +0000 UTC m=+408.364316653 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671692 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671752 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671736518 +0000 UTC m=+408.364401816 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671754 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671999 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671988625 +0000 UTC m=+408.364653243 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672531 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672704 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.672687545 +0000 UTC m=+408.365352173 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672716 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.672756287 +0000 UTC m=+408.365420895 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639242 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672971 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673014 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673033 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673099 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673081226 +0000 UTC m=+408.365745884 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673173 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.67321258 +0000 UTC m=+408.365877198 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673264 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673248731 +0000 UTC m=+408.365913349 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639324 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673286 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673336 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673316633 +0000 UTC m=+408.365981241 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673408 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673426 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673437 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673488 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673478917 +0000 UTC m=+408.366143535 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673539 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673578 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.67356302 +0000 UTC m=+408.366227628 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673643 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673683063 +0000 UTC m=+408.366347691 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673757 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674115 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674094425 +0000 UTC m=+408.366759153 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674173 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674220 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674205408 +0000 UTC m=+408.366870036 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674282 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674322 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674309221 +0000 UTC m=+408.366973829 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674398 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674421 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674440 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674465796 +0000 UTC m=+408.367130424 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674534 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674551678 +0000 UTC m=+408.367216396 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674648 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674664 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674677 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674704263 +0000 UTC m=+408.367368891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675003 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675019 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675032 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675075 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675066653 +0000 UTC m=+408.367731371 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675151 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675165 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675173 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675210 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675198537 +0000 UTC m=+408.367863255 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675272 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675312 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675294489 +0000 UTC m=+408.367959107 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675388 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675404 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675422 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675466 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675448834 +0000 UTC m=+408.368113632 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675554 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675569 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675577 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675615 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675600998 +0000 UTC m=+408.368265726 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676118663 +0000 UTC m=+408.368783501 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676208 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676222 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676230 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676282 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676273397 +0000 UTC m=+408.368938015 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639039 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676322999 +0000 UTC m=+408.368987687 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639667 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676374 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.67636126 +0000 UTC m=+408.369025868 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639693 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676422 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676411631 +0000 UTC m=+408.369076239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639744 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676454 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676497444 +0000 UTC m=+408.369162182 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.640267 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676561 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676553685 +0000 UTC m=+408.369218303 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.640674 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676608 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676595417 +0000 UTC m=+408.369260025 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.640955 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676653 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676637398 +0000 UTC m=+408.369302016 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641007 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676699 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676688709 +0000 UTC m=+408.369353317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641252 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677108 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677093321 +0000 UTC m=+408.369757929 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641305 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677151 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677144412 +0000 UTC m=+408.369809130 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641344 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677342 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677364 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677509 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677498012 +0000 UTC m=+408.370162740 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677625 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677668 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677648977 +0000 UTC m=+408.370313595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677624 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677706218 +0000 UTC m=+408.370370906 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678032 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678071 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678225 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678378 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678399 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678414 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.680703 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.682398 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.682423 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.682440 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.687249 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.688482 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"41d80ed1b6b3289201cf615c5e532a96845a5c98c79088b67161733f63882539"} Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 
19:50:40.688504 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.688567 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689052 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689161 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689177 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689186 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689288 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689303 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689312 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689418 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689432 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689440 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689530 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object 
"openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689543 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689552 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689626 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.690089 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694406 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.678068569 +0000 UTC m=+408.370733357 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694516299 +0000 UTC m=+408.387180897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694542109 +0000 UTC m=+408.387206697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.69456639 +0000 UTC m=+408.387230978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694730915 +0000 UTC m=+408.387395613 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694747315 +0000 UTC m=+408.387412013 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639365 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.699059 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.702968 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.699630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709018 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694766776 +0000 UTC m=+408.387431374 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709300 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.70926622 +0000 UTC m=+408.401930828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.709495977 +0000 UTC m=+408.402160775 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709550 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.709529308 +0000 UTC m=+408.402193966 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709577 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.709565979 +0000 UTC m=+408.402230657 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714404 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.714227782 +0000 UTC m=+408.406892390 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.71450534 +0000 UTC m=+408.407169938 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714548 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.714531291 +0000 UTC m=+408.407196129 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714580 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.714570242 +0000 UTC m=+408.407234840 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714627 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714913 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714958 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" 
(UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.715021 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.715468 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.715519 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715737 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715893 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.715874899 +0000 UTC m=+408.408539737 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715975 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715989 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716030 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716022723 +0000 UTC m=+408.408687341 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716072 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716051654 +0000 UTC m=+408.408716242 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716131 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716198 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716190428 +0000 UTC m=+408.408855046 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716287 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716313 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716336 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716388 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716376064 +0000 UTC m=+408.409040762 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716477 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719348 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719369 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719425 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.71940179 +0000 UTC m=+408.412066418 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719496 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719514 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719522 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.719549504 +0000 UTC m=+408.412214122 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719639 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719663 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719672 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719718 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.719706979 +0000 UTC m=+408.412371597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719877 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719924 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.719909655 +0000 UTC m=+408.412574383 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.757513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.764569 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"9bb711518b1fc4ac72f4ad05c59c2bd3bc932c94879c31183df088652e4ed2c3"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.790268 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"815c16566f290b783ea9eced9544573db3088d99a58cb4d87a1fd8ab2b69614e"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.797291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: W0813 19:50:40.810977 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fb762d1_812f_43f1_9eac_68034c1ecec7.slice/crio-44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c WatchSource:0}: Error finding container 44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c: Status 404 returned error can't find the container with id 44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.822586 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.833599 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.833751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.834230 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834607 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834649 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834664 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 
19:50:40.834725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.834708246 +0000 UTC m=+408.527372974 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834866 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834883 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834892 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.834913601 +0000 UTC m=+408.527578409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834978 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834988 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.835013 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.835004674 +0000 UTC m=+408.527669292 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.837241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.849250 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"e76d945a8cb210681a40e3f9356115ebf38b8c8873e7d7a82afbf363f496a845"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.873331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.888954 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"807117e45707932fb04c35eb8f8cd7233e9fecc547b5e6d3e81e84b6f57d09af"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.900523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.927267 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"e4abca68aabfc809ca21711270325e201599e8b85acaf41371638a0414333adf"} Aug 13 19:50:40 crc kubenswrapper[4183]: W0813 19:50:40.932948 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a23c0ee_5648_448c_b772_83dced2891ce.slice/crio-7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f WatchSource:0}: Error finding container 7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f: Status 404 returned error can't find the container with id 7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.933327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.960512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.978547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.029130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.057008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.105141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.140646 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.198939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.215139 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.215402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.216639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.227373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.227519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.227623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219509 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220063 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220162 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.239620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.239725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.239927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.240038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220266 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.240170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220525 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.249612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.249891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.249990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.251216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.256207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.300629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7
26a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.315526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.334712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.378702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: W0813 19:50:41.431345 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc291782_27d2_4a74_af79_c7dcb31535d2.slice/crio-8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4 WatchSource:0}: Error finding container 8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4: Status 404 returned error can't find the container with id 8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4 Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.464467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:
01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473207 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473312 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.473293997 +0000 UTC m=+410.165958705 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473343 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473384 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473415 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473454 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473484 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473627 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473669 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.473660538 +0000 UTC m=+410.166325246 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473663 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473715 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473743 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.47373438 +0000 UTC m=+410.166398998 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473866 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.473765421 +0000 UTC m=+410.166430119 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.474724 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.474885 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.474898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.474937 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.474923424 +0000 UTC m=+410.167588172 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.474971 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.474988 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475039 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475018606 +0000 UTC m=+410.167683294 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475055 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475090 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475112 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475080758 +0000 UTC m=+410.167745376 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475137 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. 
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475202 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475235 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475301 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475355 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475389 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475498 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475503 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475553 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475540571 +0000 UTC m=+410.168205349 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475584 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475587 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475592 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475614 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475626 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475614703 +0000 UTC m=+410.168279391 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475632 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475656 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475669 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475662325 +0000 UTC m=+410.168327093 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475697 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475687716 +0000 UTC m=+410.168352494 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475703 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475716 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475706626 +0000 UTC m=+410.168371284 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475720 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475616 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475723857 +0000 UTC m=+410.168388455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475745 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475737937 +0000 UTC m=+410.168402635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475509 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475879 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475766 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482341 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482455 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482495 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482521 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:41 crc 
kubenswrapper[4183]: E0813 19:50:41.482725 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482866 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.482770288 +0000 UTC m=+410.175434906 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482930 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482959 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.482951963 +0000 UTC m=+410.175616581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482992 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483014 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483000085 +0000 UTC m=+410.175664723 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483014 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483039 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483029485 +0000 UTC m=+410.175694083 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483043 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483054 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483046476 +0000 UTC m=+410.175711064 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482991 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483074 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483075 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483062736 +0000 UTC m=+410.175727414 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483099 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483092237 +0000 UTC m=+410.175756865 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483131 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483113728 +0000 UTC m=+410.175778356 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483145469 +0000 UTC m=+410.175810157 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483313 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483355 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483346034 +0000 UTC m=+410.176010642 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.532082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 
19:50:41.564233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.584270 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.584384 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.584368602 +0000 UTC m=+410.277033220 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.584097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.584529 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585072 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.585285 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585561 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585653 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.585643078 +0000 UTC m=+410.278307816 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.585718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585879 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585899 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.585890855 +0000 UTC m=+410.278555453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.585980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586115 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586173 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586248 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.586644 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588029 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588496 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588555 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588628 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.597944 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.58676001 +0000 UTC m=+410.279427598 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598293 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598272999 +0000 UTC m=+410.290937717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598391513 +0000 UTC m=+410.291056211 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598431534 +0000 UTC m=+410.291096192 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598457 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598450934 +0000 UTC m=+410.291115522 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598465835 +0000 UTC m=+410.291130423 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.610340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.656593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.687893 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688018 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688058 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688110 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688194 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688223 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 
19:50:41.688254 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688280 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688314 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688350 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688402 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688437 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688498 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688614 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688642 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688664 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688694 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc 
kubenswrapper[4183]: I0813 19:50:41.688746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689104 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689172 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689203 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689255 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689316 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689344 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689374 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689462 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689488 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689520 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689543 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689568 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689600 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689713 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689737 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689851 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689887 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod 
\"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689910 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689937 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690007 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690079 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 
19:50:41.690123 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690128 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690167 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690184 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690170346 +0000 UTC m=+410.382834964 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690323 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690369 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690378 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690411 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690402152 +0000 UTC m=+410.383066880 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690433 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690450 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690461 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690509 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690522 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690524 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690532 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690561 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690552977 +0000 UTC m=+410.383217655 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690593 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690605 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690629299 +0000 UTC m=+410.383294037 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690667 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690680 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690688 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690704621 +0000 UTC m=+410.383369349 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712639 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712717 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712754 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712940 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: 
\"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713007 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713042 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713077 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713113 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713142 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713166 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713255 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713293 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713679 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713888 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714150 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714505 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.71447942 +0000 UTC m=+410.407144068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.703566 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714544292 +0000 UTC m=+410.407208900 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705091 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714593884 +0000 UTC m=+410.407258612 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705144 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714632695 +0000 UTC m=+410.407297303 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705183 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714682 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714674726 +0000 UTC m=+410.407339334 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705281 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714719 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714739 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714917 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714769609 +0000 UTC m=+410.407434287 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705329 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714971 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714957694 +0000 UTC m=+410.407622322 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705387 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715010 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715002725 +0000 UTC m=+410.407667343 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705450 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715033 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715043 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715071 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715065347 +0000 UTC m=+410.407729965 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705499 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715089 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715103 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715147 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715136119 +0000 UTC m=+410.407800947 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705898 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715168 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715180 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715205 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715197771 +0000 UTC m=+410.407862389 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705954 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715254 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715244502 +0000 UTC m=+410.407909310 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705993 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715296 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715284993 +0000 UTC m=+410.407949611 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706313 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715323 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715332 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715351785 +0000 UTC m=+410.408016403 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706371 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715377 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715393497 +0000 UTC m=+410.408058115 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706420 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715456 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715439978 +0000 UTC m=+410.408104646 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706515 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715499 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715485659 +0000 UTC m=+410.408150317 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706554 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715539221 +0000 UTC m=+410.408203899 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706588 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715595002 +0000 UTC m=+410.408259680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706679 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715642164 +0000 UTC m=+410.408306782 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706725 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715696 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715687055 +0000 UTC m=+410.408351723 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706761 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715731526 +0000 UTC m=+410.408396204 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715931 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715974 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715988 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716016 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716027 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716002084 +0000 UTC m=+410.408666832 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716033 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716078396 +0000 UTC m=+410.408743094 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716174 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.71620303 +0000 UTC m=+410.408867638 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716266 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716304 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716296222 +0000 UTC m=+410.408960840 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716360 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716419 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716399095 +0000 UTC m=+410.409063783 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716502 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716517 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716536 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.71656071 +0000 UTC m=+410.409225338 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717416 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717468 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.717457456 +0000 UTC m=+410.410122084 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717527 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717571 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.717561238 +0000 UTC m=+410.410225856 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717625 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717657 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.717649561 +0000 UTC m=+410.410314179 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717711 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717754 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.717744934 +0000 UTC m=+410.410409562 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717921 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717959 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.717946559 +0000 UTC m=+410.410611188 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717999 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718035 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718026932 +0000 UTC m=+410.410691550 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718082 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718117 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718105784 +0000 UTC m=+410.410770392 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718167 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718198 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718188206 +0000 UTC m=+410.410852814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718250 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718289 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718280859 +0000 UTC m=+410.410945487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718331 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718368 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718355621 +0000 UTC m=+410.411020239 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718428 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718446 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718460 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718488 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718480835 +0000 UTC m=+410.411145453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718555 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718582918 +0000 UTC m=+410.411247526 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718642 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718676 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.71866425 +0000 UTC m=+410.411328868 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718731 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718747 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718759 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716666 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.735182 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.735145851 +0000 UTC m=+410.427810479 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690340 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737022 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737118 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.737087597 +0000 UTC m=+410.429752215 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737248 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.737283132 +0000 UTC m=+410.429947750 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737451 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737494 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.737477488 +0000 UTC m=+410.430142156 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.738586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739061 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739120 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739101254 +0000 UTC m=+410.431765942 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739199 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739231228 +0000 UTC m=+410.431895916 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739307 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739347 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739338121 +0000 UTC m=+410.432002739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739402 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739452 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739437684 +0000 UTC m=+410.432102632 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739549 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739569 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739581 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739641 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739626979 +0000 UTC m=+410.432291877 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739722 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.748345 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.748656 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.748993 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.749963 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750098 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750337 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object 
"openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750452 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750547 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750722 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751004 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751197 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751296 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751441 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751891 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690463 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752368 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752335 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752417 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752425 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object 
"openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752510 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752532 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752541 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752615 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752659 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752689 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.753277 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.753568 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.753876 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.754000 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.754208 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.754650 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755005 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 
13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755084 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755102 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755199 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755282 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755356 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755438 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755524 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755641 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755658 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755670 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755847 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755868 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755878 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: 
E0813 19:50:41.755957 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755973 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755992 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756083 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756096 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756104 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756517 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756591 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756608 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.757937 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756534 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.762895 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.739765183 +0000 UTC m=+410.432429871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772576971 +0000 UTC m=+410.465241569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772623 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772615092 +0000 UTC m=+410.465279680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772648 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772634573 +0000 UTC m=+410.465299171 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772676 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772665973 +0000 UTC m=+410.465330571 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772686474 +0000 UTC m=+410.465351072 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772706715 +0000 UTC m=+410.465371313 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773183 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773166568 +0000 UTC m=+410.465831176 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773211 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773197619 +0000 UTC m=+410.465862767 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773244 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773227879 +0000 UTC m=+410.465892537 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773267 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.77325492 +0000 UTC m=+410.465919588 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773283 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773275991 +0000 UTC m=+410.465940579 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773302 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773291351 +0000 UTC m=+410.465955939 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773318 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773310462 +0000 UTC m=+410.465975170 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773341 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773325992 +0000 UTC m=+410.465990590 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773363 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773351593 +0000 UTC m=+410.466016191 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773381 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773373204 +0000 UTC m=+410.466037792 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773391064 +0000 UTC m=+410.466055662 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773424 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773415815 +0000 UTC m=+410.466080413 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773434505 +0000 UTC m=+410.466099093 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773460 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773450346 +0000 UTC m=+410.466114944 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773486 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.773477737 +0000 UTC m=+410.466142335 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773501 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773493747 +0000 UTC m=+410.466158585 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773524 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773509958 +0000 UTC m=+410.466174656 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773545 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773538918 +0000 UTC m=+410.466203506 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773554819 +0000 UTC m=+410.466219527 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773584 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773570469 +0000 UTC m=+410.466235057 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773584 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773607 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.77360082 +0000 UTC m=+410.466265408 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773628 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773620201 +0000 UTC m=+410.466284799 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773648 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773637351 +0000 UTC m=+410.466301939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773663 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773655842 +0000 UTC m=+410.466320440 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773678 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773671472 +0000 UTC m=+410.466336070 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773700 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773686623 +0000 UTC m=+410.466351391 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773715643 +0000 UTC m=+410.466380241 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.775512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.816971 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817039 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817074 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817105 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817136 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817170 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817217 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817242 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817286 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817340 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817375 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.822371 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object 
"openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.822540 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.822520858 +0000 UTC m=+410.515185666 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823111 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827029 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827017917 +0000 UTC m=+410.519682545 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823314 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827060 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827080 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827112 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827104519 +0000 UTC m=+410.519769127 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823368 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827146 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.82713893 +0000 UTC m=+410.519803548 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823422 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827164 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827171 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827192 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827186582 +0000 UTC m=+410.519851190 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823486 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827211 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827224 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827254 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827246873 +0000 UTC m=+410.519911481 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823527 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827274 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827298 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827336 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827328276 +0000 UTC m=+410.519992884 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823577 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827361 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827372 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827412 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827392608 +0000 UTC m=+410.520057226 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823611 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827475 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.82746769 +0000 UTC m=+410.520132368 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.823960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.824019 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827643 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827664485 +0000 UTC m=+410.520329103 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.824606 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827714 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827705556 +0000 UTC m=+410.520370174 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.828028 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.828078 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.834861 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.836083 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.836070246 +0000 UTC m=+410.528734984 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.835018 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.836549 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.836639 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.836863 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.836757605 +0000 UTC m=+410.529422303 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.900416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on 
the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.929755 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.929893 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.930475 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932632 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932688 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932702 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932873 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932894 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932930 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.932914523 +0000 UTC m=+410.625579141 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932993 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933008 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933016 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933042 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.933033907 +0000 UTC m=+410.625698525 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933284 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.933273204 +0000 UTC m=+410.625937902 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.980329 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:41.999954 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.001483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.013623 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.022652 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"7f52ab4d1ec6be2d7d4c2b684f75557c65a5b3424d556a21053e8abd54d2afd9"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.037563 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"55fa820b6afd0d7cad1d37a4f84deed3f0ce4495af292cdacc5f97f75e79113b"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.044591 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.038442 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045279 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045395 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.056295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.091309 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089
fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0
f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd
1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125146 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125191 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125204 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125228 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125259 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.149043 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 
13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.149323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.164679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165008 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165032 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165056 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165121 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.203542 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 
13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.208455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.208726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.208959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.210605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.236393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.305114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.411594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48c1471ee6eaa615e5b0e19686e3fafc0f687dc03625988c88b411dc682d223f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:24:26Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417054 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417101 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417123 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417160 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417211 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.484289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.485059 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089
fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0
f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd
1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511656 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511714 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511766 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.548476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.567581 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 
13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.567636 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.604743 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.632592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.674444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.762492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9
c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.812684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.840428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.906099 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.004691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.041613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.062631 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9"} Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.210345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.210657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.210731 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213391 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214123 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214561 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215218 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216027 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.218253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.218319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.343986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.422944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.514687 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.514748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.514897 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515258 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515439 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515471 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515514 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515488903 +0000 UTC m=+414.208153801 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515526 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515575 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515573466 +0000 UTC m=+414.208238094 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515625 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515659 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515645408 +0000 UTC m=+414.208313616 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515308 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515699 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515731 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515743 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.51573433 +0000 UTC m=+414.208399048 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515751901 +0000 UTC m=+414.208416629 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515928 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515990 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515979677 +0000 UTC m=+414.208644585 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.516530 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.516514443 +0000 UTC m=+414.209179041 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.516683 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.517085 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.517051798 +0000 UTC m=+414.209716786 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518172 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518245 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518379 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518431 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518444 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518493 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.518483599 +0000 UTC m=+414.211148187 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518553 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518563 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518589 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.518591882 +0000 UTC m=+414.211256610 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518644 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519254 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.51923774 +0000 UTC m=+414.211902459 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518648 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519309783 +0000 UTC m=+414.211974501 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518707 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519375 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519361484 +0000 UTC m=+414.212026322 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518723 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519424 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519450 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519500 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519488108 +0000 UTC m=+414.212152826 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518725 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519560 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519543879 +0000 UTC m=+414.212208827 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518744 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519607 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519597711 +0000 UTC m=+414.212262419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.520134 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.520193 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.520174697 +0000 UTC m=+414.212839425 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.520235 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.520625 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.520867 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.520846137 +0000 UTC m=+414.213510855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.520928 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.521262 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.524465 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.52444889 +0000 UTC m=+414.217113698 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.526023 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.526585 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.526647 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.526632962 +0000 UTC m=+414.219297700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.526707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.527516 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.527570 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.527557568 +0000 UTC m=+414.220222276 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.527322 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.528049 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.528102 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.528090394 +0000 UTC m=+414.220755172 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.528138 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.529140 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.529645 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.529621587 +0000 UTC m=+414.222286265 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.529743 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.530001 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.530276 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.530723 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.531437 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.531413499 +0000 UTC m=+414.224078277 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.531955 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.537273 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.537219314 +0000 UTC m=+414.229884073 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.537416 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.635495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640227 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640281 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.641707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.641996 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.642728 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645430 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645523 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.645496509 +0000 UTC m=+414.338161197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645598 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.645637393 +0000 UTC m=+414.338302011 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645695 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645740 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.645724716 +0000 UTC m=+414.338389334 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646011 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646057 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646046705 +0000 UTC m=+414.338711533 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646110 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646147 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646139048 +0000 UTC m=+414.338803666 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646248 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646300 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646286252 +0000 UTC m=+414.338950920 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646358 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646388525 +0000 UTC m=+414.339053233 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646647 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646703 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646685713 +0000 UTC m=+414.339350411 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.647645 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.647740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.648123 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.648200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.648184686 +0000 UTC m=+414.340849784 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.662162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.714059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750487 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750667 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750698 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750731 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750867 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750922 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750950 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750977 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751001 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751088 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751117 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751150 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751242 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751354 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751388 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751503 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751536 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751567 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751597 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751625 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751657 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751687 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751923 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752036 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752069 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813
19:50:43.752098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752121 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752174 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752197 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752224 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753503 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753528 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753544 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753632 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753713 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753901 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754029 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754090 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754142 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754205 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754217 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754227 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod 
openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754311 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754386 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754638 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754658 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754666 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755092 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755126 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755136 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755431 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755446 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755453 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762286 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762364 4183 configmap.go:199] Couldn't get 
configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762430 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762444 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762500 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762545 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762599 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763275 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763423 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763463 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763538 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763608 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763626 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763638 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763686 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763747 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object 
"openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763942 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763995 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764065 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764110 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764155 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764215 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764273 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764319 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764384 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764405 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764415 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764469 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764527 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764567 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 
19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764628 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764693 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764706 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764714 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764764 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.765020 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.770704 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.770865 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771036 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771071 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771073 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771099 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771108 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771182 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771155751 +0000 UTC m=+414.463820489 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771200 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771237 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771224952 +0000 UTC m=+414.463889581 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771295 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771302 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771329 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771341 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771377 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771366877 +0000 UTC m=+414.464031495 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771406 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771441 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771428418 +0000 UTC m=+414.464093146 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771463 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771454529 +0000 UTC m=+414.464119237 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771482 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.77147303 +0000 UTC m=+414.464137728 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771505 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.77149748 +0000 UTC m=+414.464162068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771509 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771512321 +0000 UTC m=+414.464176909 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771525 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771541 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771533591 +0000 UTC m=+414.464198189 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771594 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771582363 +0000 UTC m=+414.464246961 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771595 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771617 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771618 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771605553 +0000 UTC m=+414.464270151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771626 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771638 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771631324 +0000 UTC m=+414.464295922 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771657 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771649985 +0000 UTC m=+414.464314703 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771674 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771666855 +0000 UTC m=+414.464331573 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771732037 +0000 UTC m=+414.464396745 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771767 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771754358 +0000 UTC m=+414.464421876 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772466 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772456498 +0000 UTC m=+414.465121096 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772483 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.772475938 +0000 UTC m=+414.465140536 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772499 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772492489 +0000 UTC m=+414.465157087 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772509489 +0000 UTC m=+414.465174077 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772540 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.77253382 +0000 UTC m=+414.465198408 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772558 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.7725487 +0000 UTC m=+414.465213298 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772573 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.772566491 +0000 UTC m=+414.465231089 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772589 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772581841 +0000 UTC m=+414.465246439 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772597632 +0000 UTC m=+414.465262230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772626 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772617542 +0000 UTC m=+414.465282140 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772645 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772638463 +0000 UTC m=+414.465303061 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772660 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772653053 +0000 UTC m=+414.465317641 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772678 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772669304 +0000 UTC m=+414.465333902 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772695 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772685684 +0000 UTC m=+414.465350282 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772704965 +0000 UTC m=+414.465369563 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772720015 +0000 UTC m=+414.465384603 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772741 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772734486 +0000 UTC m=+414.465399074 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772757 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772750336 +0000 UTC m=+414.465415044 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772974 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772764336 +0000 UTC m=+414.465428924 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
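The nestedpendingoperations.go:348 lines above are kubelet's per-volume retry throttle: each consecutive MountVolume.SetUp failure for the same volume roughly doubles the wait before the next attempt (4s at this point in the log), up to a cap. A minimal sketch of that policy follows; the initial duration and cap are illustrative assumptions, not the kubelet's actual constants.

```go
package main

import (
	"fmt"
	"time"
)

// backoff is an illustrative stand-in for the per-operation exponential
// backoff that nestedpendingoperations keeps for each volume.
type backoff struct {
	last time.Duration
}

func (b *backoff) next() time.Duration {
	const (
		initial = 500 * time.Millisecond // assumed starting point
		factor  = 2                      // doubles on each consecutive failure
		max     = 2 * time.Minute        // assumed cap
	)
	if b.last == 0 {
		b.last = initial
	} else if b.last < max {
		b.last *= factor
		if b.last > max {
			b.last = max
		}
	}
	return b.last
}

func main() {
	var b backoff
	for i := 0; i < 5; i++ {
		// A MountVolume.SetUp failure would be recorded here; the next retry
		// is forbidden until now + b.next(), which is what the log's
		// "No retries permitted until ... (durationBeforeRetry 4s)" reports.
		fmt.Printf("failure %d: next retry after %v\n", i+1, b.next())
	}
}
```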
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773031 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773023504 +0000 UTC m=+414.465688102 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773047 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773039184 +0000 UTC m=+414.465703772 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773064 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773054745 +0000 UTC m=+414.465719343 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773082 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773074875 +0000 UTC m=+414.465739463 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773098 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773091126 +0000 UTC m=+414.465755714 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773114 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773107246 +0000 UTC m=+414.465771844 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773122747 +0000 UTC m=+414.465787345 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773153 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773142567 +0000 UTC m=+414.465807165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773171 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773164398 +0000 UTC m=+414.465828996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773194 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.773181148 +0000 UTC m=+414.465845746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773268 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773432 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773463 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773531 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773569 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773642 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773667 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773702 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774431 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774451 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774535 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774768 4183 
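The interleaving of the I-level reconciler_common.go "operationExecutor.MountVolume started" lines with the E-level projected.go/secret.go/configmap.go failures that follow is kubelet's volume-manager loop at work: a reconciler diffs the desired state of the world (volumes the node's pods need) against the actual state (volumes already mounted) and starts a mount operation for each gap, and here every operation immediately fails because the backing object is not yet registered. A deliberately simplified, hypothetical sketch of that loop (the real code lives in the kubelet's reconciler and operation executor):

```go
package main

import "fmt"

// volumeToMount is a hypothetical simplification of kubelet's
// desired-state-of-world entry for a volume a pod requires.
type volumeToMount struct{ uniqueName, pod string }

// reconcile diffs desired against actual state and starts a mount
// operation for every volume that is not yet mounted.
func reconcile(desired []volumeToMount, mounted map[string]bool, mount func(volumeToMount) error) {
	for _, v := range desired {
		if mounted[v.uniqueName] {
			continue // already in actual state; nothing to do
		}
		// Corresponds to the I-level "operationExecutor.MountVolume started" lines.
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.uniqueName, v.pod)
		if err := mount(v); err != nil {
			// Corresponds to the E-level "MountVolume.SetUp failed" lines; the
			// failure is recorded and the volume is retried after backoff.
			fmt.Printf("MountVolume.SetUp failed for volume %q : %v\n", v.uniqueName, err)
		}
	}
}

func main() {
	desired := []volumeToMount{{"kubernetes.io/configmap/example-config", "example-pod"}}
	reconcile(desired, map[string]bool{}, func(v volumeToMount) error {
		return fmt.Errorf("object \"example-ns\"/\"example-config\" not registered")
	})
}
```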
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774768 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774878 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774962 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775033 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775086 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775161 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775174 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775182 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775248 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775309 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775364 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775443 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775457 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775514 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776292 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776311 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776325 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.776356609 +0000 UTC m=+414.469021237 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780111 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780096096 +0000 UTC m=+414.472760724 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
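Most of the failing volumes named kube-api-access-* are projected service-account volumes: kubelet assembles each one from the pod's service-account token, the kube-root-ca.crt ConfigMap, and (on OpenShift) the openshift-service-ca.crt ConfigMap, which is why a single volume fails with both "not registered" objects at once. A sketch of what such a volume looks like when built with the k8s.io/api types; the volume name and token expiry below are illustrative, not taken from this cluster:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // illustrative bound-token lifetime in seconds

	vol := corev1.Volume{
		Name: "kube-api-access-example", // real volumes get a random suffix
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// If either ConfigMap lookup fails, the whole projected
					// volume fails, producing the paired "kube-root-ca.crt /
					// openshift-service-ca.crt not registered" errors above.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```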
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780165 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780157448 +0000 UTC m=+414.472822046 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780186 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780177648 +0000 UTC m=+414.472842236 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780204 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780197099 +0000 UTC m=+414.472861697 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780223 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.78021662 +0000 UTC m=+414.472881218 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780241 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.78023447 +0000 UTC m=+414.472899068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780256 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.78024897 +0000 UTC m=+414.472913558 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780272 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780265931 +0000 UTC m=+414.472930519 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780298 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780289912 +0000 UTC m=+414.472954510 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780313 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780306622 +0000 UTC m=+414.472971350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780345 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. 
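Every one of these failures bottoms out in the same error shape, object "namespace"/"name" not registered: kubelet's watch-based ConfigMap and Secret managers only serve objects that an admitted pod has been registered as referencing, and shortly after a kubelet restart (note the uptime offset m=+414, about seven minutes) volume setup can run before that registration has caught up. A toy model of that gate, with hypothetical names; the real managers live in the kubelet's configmap and secret packages:

```go
package main

import (
	"fmt"
	"sync"
)

// objectCache is an illustrative stand-in for kubelet's registration-gated
// ConfigMap/Secret manager: GetObject refuses to serve anything no pod has
// registered, which is exactly the failure mode repeated in this log.
type objectCache struct {
	mu         sync.Mutex
	registered map[string]bool   // "namespace/name" -> referenced by a registered pod
	data       map[string]string // stand-in for the cached object payload
}

// RegisterPod records the objects a newly admitted pod references.
func (c *objectCache) RegisterPod(refs ...string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for _, r := range refs {
		c.registered[r] = true
	}
}

// GetObject serves a cached object only after registration.
func (c *objectCache) GetObject(key string) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if !c.registered[key] {
		return "", fmt.Errorf("object %q not registered", key)
	}
	return c.data[key], nil
}

func main() {
	c := &objectCache{
		registered: map[string]bool{},
		data:       map[string]string{"openshift-dns/dns-default": "Corefile contents"},
	}
	if _, err := c.GetObject("openshift-dns/dns-default"); err != nil {
		fmt.Println("before registration:", err) // mirrors the log's error
	}
	c.RegisterPod("openshift-dns/dns-default")
	v, _ := c.GetObject("openshift-dns/dns-default")
	fmt.Println("after registration:", v)
}
```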
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780345 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780336383 +0000 UTC m=+414.473000981 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780357064 +0000 UTC m=+414.473021662 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780386 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780378114 +0000 UTC m=+414.473042702 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780429 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780414835 +0000 UTC m=+414.473079633 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.780470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.780511 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.780546 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780706 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780756 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780948 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780997 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781017 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780997852 +0000 UTC m=+414.473662700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780954 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781041 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.781032083 +0000 UTC m=+414.473696701 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781057 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781070 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781134 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.781122675 +0000 UTC m=+414.473787603 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.800352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.859673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
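The two status_manager.go:877 entries record a second, independent failure: kubelet's strategic-merge patch of pod status is rejected because the API server must first call the pod.network-node-identity.openshift.io mutating webhook, and its endpoint at https://127.0.0.1:9743/pod is not listening yet, so the whole PATCH fails with connection refused. Schematically, what has to come up is just a TLS admission endpoint; the sketch below is a generic allow-everything handler with placeholder certificate paths, not the actual operator's code.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
)

// servePod decodes an AdmissionReview from the API server and answers
// "allowed" without mutating anything; a real webhook would inspect or
// patch the pod here.
func servePod(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "bad AdmissionReview", http.StatusBadRequest)
		return
	}
	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID, // response must echo the request UID
		Allowed: true,
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/pod", servePod)
	// tls.crt/tls.key are placeholders; admission webhooks must serve TLS.
	// Until something accepts connections on this address, every matched
	// status patch fails exactly as in the log above.
	log.Fatal(http.ListenAndServeTLS("127.0.0.1:9743", "tls.crt", "tls.key", nil))
}
```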
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883117 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883382 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883672 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883699 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883719 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883708177 +0000 UTC m=+414.576372795 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883745 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883735428 +0000 UTC m=+414.576400736 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883759 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883880 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883868982 +0000 UTC m=+414.576533900 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883914 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883937 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883951 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883952 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883980 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883972025 +0000 UTC m=+414.576636743 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884001 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883990715 +0000 UTC m=+414.576655423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884158 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884201 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884387 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884414 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884488 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:43 crc 
kubenswrapper[4183]: I0813 19:50:43.884514 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884567 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884579 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884648 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884652 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884592 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884706 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884720 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884721 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884728 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884744 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884883 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884899 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884909 4183 projected.go:200] 
Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.884590163 +0000 UTC m=+414.577254881 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885048 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885163 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885248 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885283 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 
nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885269792 +0000 UTC m=+414.577934380 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885357 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885380 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885361995 +0000 UTC m=+414.578026673 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885392605 +0000 UTC m=+414.578057263 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885411186 +0000 UTC m=+414.578075784 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885438 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885440 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885432107 +0000 UTC m=+414.578096735 (durationBeforeRetry 4s). 
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885467 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885460627 +0000 UTC m=+414.578125335 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885487 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885480158 +0000 UTC m=+414.578144746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885506 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885520 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885532079 +0000 UTC m=+414.578196987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885575 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885622 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885638 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885652 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885677414 +0000 UTC m=+414.578342142 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885768 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886079 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886101 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886112 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.886423 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.886456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886486 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886467606 +0000 UTC m=+414.579132234 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886504607 +0000 UTC m=+414.579169205 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886524 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886546 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886556 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886525 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886519508 +0000 UTC m=+414.579184096 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886587 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886616 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886626 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886633 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886625161 +0000 UTC m=+414.579289759 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886643371 +0000 UTC m=+414.579308079 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.886619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886664362 +0000 UTC m=+414.579329160 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.886722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887184 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887236 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887448 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887572 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887608 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887633 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.887942 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.887984 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.887997 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888029 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888020291 +0000 UTC m=+414.580684909 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888083 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888098 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888108 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888134 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888126604 +0000 UTC m=+414.580791512 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888172 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888198 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888192275 +0000 UTC m=+414.580856993 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888244 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888255 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888263 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888290 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888280448 +0000 UTC m=+414.580945176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888295 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888313 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888328 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888350 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.88833916 +0000 UTC m=+414.581003778 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888370 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.8883598 +0000 UTC m=+414.581024398 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888382 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888393 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888428 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888421622 +0000 UTC m=+414.581086350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888470 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888482 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888490 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888511 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888526 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888535 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888514 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888507274 +0000 UTC m=+414.581172002 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888566 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888568106 +0000 UTC m=+414.581232814 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888582 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888593 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888613 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888667 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888656339 +0000 UTC m=+414.581321067 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888682109 +0000 UTC m=+414.581346777 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888700 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888722161 +0000 UTC m=+414.581386899 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888469 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888867 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888880 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888915 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888906736 +0000 UTC m=+414.581571474 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.890483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.889077 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.88905856 +0000 UTC m=+414.581726188 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.949696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.989954 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990154 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990313 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990332 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990405 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.990387736 +0000 UTC m=+414.683052464 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.990492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990832 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990886 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990991 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.990967033 +0000 UTC m=+414.683631781 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.991646 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.991690 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.991701 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.991906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.992018 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. 
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.992018 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.991993472 +0000 UTC m=+414.684658100 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.005340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.099383 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.121366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.129156 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.140241 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.155201 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.193054 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e" exitCode=0
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.193152 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.193071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.209923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.211035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.211115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.225975 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.228432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.295126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.356001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.423151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.435161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.442647 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.442750 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.459239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.504534 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.564752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.591753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.621835 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.651268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.703514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.751489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.837500 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.882690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.019943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48c1471ee6eaa615e5b0e19686e3fafc0f687dc03625988c88b411dc682d223f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:24:26Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.096662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.130982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.162479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.198924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.209645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.209706 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.209909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.209959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210558 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212480 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213418 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214565 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215181 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.216268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.232187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.241962 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.253237 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.258222 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.277664 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.294170 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.302644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.308194 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.348601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.350341 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.381324 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.430546 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.430641 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.435459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.478754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.517912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.576271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.613625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.656204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.706152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.751087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.800708 
4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.838103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.871207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.923054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.965925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.003440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.040298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.084672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.111724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.150511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.205934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.208577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.209439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.209623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.209989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.210222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.210708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.211035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.211154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.211304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.326550 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212"} Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.339235 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4"} Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.410512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.471721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.504611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.620223 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.620387 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.716418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.770626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9572cbf27a025e52f8350ba1f90df2f73ac013d88644e34f555a7ae71822234\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:23:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:07Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.824290 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.209893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.209975 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210089 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210357 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212293 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.213066 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.213882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214513 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.268143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.305553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.366437 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303"} Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.378926 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f"} Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.434695 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.435147 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.501495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.613613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.613707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.613762 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.614007 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.614054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.614478 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.614559 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.614536227 +0000 UTC m=+422.307200935 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615022 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615069 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615058332 +0000 UTC m=+422.307722950 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615160 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615278 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615251447 +0000 UTC m=+422.307916065 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615467 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615539 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615516745 +0000 UTC m=+422.308181523 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615632 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615684 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615670329 +0000 UTC m=+422.308335107 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.617234 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.617327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.617377 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617469 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617525 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.617514212 +0000 UTC m=+422.310179020 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617585 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617638 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.617619875 +0000 UTC m=+422.310285223 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617889 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.618134 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.618139 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.618166 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.61815581 +0000 UTC m=+422.310820398 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.618732 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.618712646 +0000 UTC m=+422.311377264 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.619445 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.619619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.619526 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.619760 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.619749666 +0000 UTC m=+422.312414274 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.619942 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620334 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620317352 +0000 UTC m=+422.312982040 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.619764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620417 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620507 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620542 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620562 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620621 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.62060515 +0000 UTC m=+422.313269988 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620660 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620729 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620718823 +0000 UTC m=+422.313383451 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620767 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620911 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620927979 +0000 UTC m=+422.313592607 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620967 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621007 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620996711 +0000 UTC m=+422.313661329 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621069 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621131 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621192 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621222 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621240 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621258 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621292 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621369 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621397 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621417 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621513 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621551147 +0000 UTC m=+422.314215955 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621594 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621599 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621584858 +0000 UTC m=+422.314249506 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621659 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621712 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621699641 +0000 UTC m=+422.314364429 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621734 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621768 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621760343 +0000 UTC m=+422.314424931 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621902 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621932 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621924168 +0000 UTC m=+422.314588876 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621947 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621940318 +0000 UTC m=+422.314604916 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622000 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622028 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.622021571 +0000 UTC m=+422.314686179 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621068 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622167 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.622137704 +0000 UTC m=+422.314802412 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622095 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622209 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622258 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.622245777 +0000 UTC m=+422.314910525 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622400 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.622379931 +0000 UTC m=+422.315044609 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.689579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725167 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725210 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725303 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725415 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.725388565 +0000 UTC m=+422.418053453 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725475 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725577 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725610 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725660 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.725643622 +0000 UTC m=+422.418308510 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725691 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725698 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725723 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725741 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.725732255 +0000 UTC m=+422.418396873 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725927 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725974 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726025 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726013713 +0000 UTC m=+422.418678341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726060 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726117 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726105455 +0000 UTC m=+422.418770083 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726149 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726140266 +0000 UTC m=+422.418804854 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726184 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726216669 +0000 UTC m=+422.418881407 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.727169 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.727244 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727638 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.72766413 +0000 UTC m=+422.420328748 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727720 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727753 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.727745212 +0000 UTC m=+422.420409830 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.743070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.798019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.828950 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829185 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829269 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.829246243 +0000 UTC m=+422.521911081 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829461 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.8294957 +0000 UTC m=+422.522160428 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829737 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830151 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830262 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830322 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830348 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830448 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830510 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830576 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830633 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830670 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830745 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830770 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831033 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831064 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831129 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831227 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831491 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831597 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831665 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831900 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831987 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832061 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832239 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832297 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832351 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832480 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832536 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832643 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832694 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832882 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832924 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833267 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833318 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833350 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833396 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833484 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833527 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") "
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833564 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833638 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833685 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833964 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.837627 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.837965 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838021 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838035 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 
19:50:47.838051 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838103 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838083896 +0000 UTC m=+422.530748614 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838124 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838113817 +0000 UTC m=+422.530778425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830022 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838135 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838144 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838152 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838161 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830050 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830098 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:47 crc 
kubenswrapper[4183]: E0813 19:50:47.838200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838186659 +0000 UTC m=+422.530851347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838247 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838268 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838254961 +0000 UTC m=+422.530919669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838285 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838279041 +0000 UTC m=+422.530943629 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838300 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838293802 +0000 UTC m=+422.530958400 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838308452 +0000 UTC m=+422.530973040 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838321 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838330 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838360 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838348013 +0000 UTC m=+422.531012731 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838371 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838410 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838401915 +0000 UTC m=+422.531066623 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838414 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838479 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838539 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838590 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838439376 +0000 UTC m=+422.531103994 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838621 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838606801 +0000 UTC m=+422.531271419 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838641 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838633011 +0000 UTC m=+422.531297599 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838645 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838654 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838648692 +0000 UTC m=+422.531313290 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838676 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838667722 +0000 UTC m=+422.531332330 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838701 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838713 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838735834 +0000 UTC m=+422.531400442 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838844 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838834747 +0000 UTC m=+422.531499465 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838879 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838900 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838906 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838923 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838931 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838948 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838959 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838951881 +0000 UTC m=+422.531616619 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838911 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838975 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838967321 +0000 UTC m=+422.531631929 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838993 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838980911 +0000 UTC m=+422.531645529 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839013 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839036 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839135 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839156 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839165 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839175 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839044 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839037803 +0000 UTC m=+422.531702421 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839191857 +0000 UTC m=+422.531856465 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839218 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839211748 +0000 UTC m=+422.531876336 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839242 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839231119 +0000 UTC m=+422.531895707 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839256 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.83928366 +0000 UTC m=+422.531948278 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839297 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839329 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839321921 +0000 UTC m=+422.531986539 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839333 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839371 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839358622 +0000 UTC m=+422.532023240 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839220 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839408 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839397443 +0000 UTC m=+422.532062061 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839410 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839442 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839446 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839438954 +0000 UTC m=+422.532103652 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839478 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839472065 +0000 UTC m=+422.532136653 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839503 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839509 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839502776 +0000 UTC m=+422.532167464 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839539 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.839528127 +0000 UTC m=+422.532192895 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839565 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839585 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839591 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839625 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839631 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.83962072 +0000 UTC m=+422.532285398 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839595 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839659 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.83964991 +0000 UTC m=+422.532314629 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839680 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.839670371 +0000 UTC m=+422.532335129 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839707 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839739 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839734773 +0000 UTC m=+422.532399471 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839867 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839768684 +0000 UTC m=+422.532433272 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839898 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839905 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839920 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839931 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839935339 +0000 UTC m=+422.532599957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839959 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839949249 +0000 UTC m=+422.532613867 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839992 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839995 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840004 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840012 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840020 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840014021 +0000 UTC m=+422.532678639 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840030151 +0000 UTC m=+422.532694759 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840059 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840069 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840078 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840110 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840101113 +0000 UTC m=+422.532765811 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840111 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840138 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839095 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840197 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830081 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.837980 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840139 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config 
podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840133744 +0000 UTC m=+422.532798362 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840215 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840232 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840222027 +0000 UTC m=+422.532886615 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840249 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840263 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840287 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840296 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840243467 +0000 UTC m=+422.532908335 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840309599 +0000 UTC m=+422.532974187 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840070 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840349 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840358 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840351351 +0000 UTC m=+422.533015969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839372 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840300 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840395 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840379601 +0000 UTC m=+422.533044309 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840406552 +0000 UTC m=+422.533071170 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840431 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840426093 +0000 UTC m=+422.533090681 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840459 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840474 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840481 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840508 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840566 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840614 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840630 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840634 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840640 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840648 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840657 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840510 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840500815 +0000 UTC m=+422.533165523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840708 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840699081 +0000 UTC m=+422.533363669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840868 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840856325 +0000 UTC m=+422.533521013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840872 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840893 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840884726 +0000 UTC m=+422.533549324 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840481 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840710 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840753 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840908 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840902896 +0000 UTC m=+422.533567504 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840963 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840950378 +0000 UTC m=+422.533614996 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840990 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840977998 +0000 UTC m=+422.533642586 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840998939 +0000 UTC m=+422.533663527 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841015 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841021 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.84101563 +0000 UTC m=+422.533680218 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841029 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841043 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841077 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.841066641 +0000 UTC m=+422.533731249 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841092 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841106 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841114 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841131 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841149 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.841139653 +0000 UTC m=+422.533804261 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840567 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841187 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.841179374 +0000 UTC m=+422.533844092 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842130 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842245 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842271 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842559 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.842411779 +0000 UTC m=+422.535076617 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.874231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.938979 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.939137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.939170 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939409 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939429 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939592 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.939630198 +0000 UTC m=+422.632294816 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939725 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.939761842 +0000 UTC m=+422.632426710 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940108 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940129 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940141 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940181 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.940169033 +0000 UTC m=+422.632833711 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940232 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940265 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940279 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.940268286 +0000 UTC m=+422.632932904 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940334 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940345 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940349 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940366 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940391 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940397 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.9403891 +0000 UTC m=+422.633053798 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940458 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940479 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940488 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.940511083 +0000 UTC m=+422.633175711 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940568 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940727 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940908 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.940896614 +0000 UTC m=+422.633561322 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940987 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941019 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941011157 +0000 UTC m=+422.633675865 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941019 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941070 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941084 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941093 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941113 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94111815 +0000 UTC m=+422.633782768 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940570 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941148 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941156 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941192 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941180622 +0000 UTC m=+422.633845240 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941221 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941231 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941249 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941276 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941259365 +0000 UTC m=+422.633924053 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941301 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941312 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941329 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941322536 +0000 UTC m=+422.633987234 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941384 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941402 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941412 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941426 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941438 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941445 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941448 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94143846 +0000 UTC m=+422.634103238 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941469 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94146257 +0000 UTC m=+422.634127188 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941502 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941509 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941518 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941520 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941559 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941573 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941584 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object 
"openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941602 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941519 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941664 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941666 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941713 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941732 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941676 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941525612 +0000 UTC m=+422.634190390 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942229 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942374 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942644 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942716 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943075 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: 
\"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943146 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943281 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943332 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943392 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943395 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943458 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943438177 +0000 UTC m=+422.636102905 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943499 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943542 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943529049 +0000 UTC m=+422.636193667 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943594 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943629 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943615322 +0000 UTC m=+422.636280020 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943658 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943680 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943692 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943701 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943684044 +0000 UTC m=+422.636348752 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943750 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943739215 +0000 UTC m=+422.636403903 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944075 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944168 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944191 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943764986 +0000 UTC m=+422.636429574 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944214 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944206049 +0000 UTC m=+422.636870647 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944226 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944220879 +0000 UTC m=+422.636885467 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944242 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.94423529 +0000 UTC m=+422.636899878 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944245 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944259 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944269 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944261 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94425088 +0000 UTC m=+422.636915568 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944304 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944316 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944307182 +0000 UTC m=+422.636971770 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944336 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944329442 +0000 UTC m=+422.636994150 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944356 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944393 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944408 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944417 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944394 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944385594 +0000 UTC m=+422.637050312 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944476 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944492 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944503 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944516 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944527 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944566 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944577 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944469226 +0000 UTC m=+422.637133924 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944621 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94461294 +0000 UTC m=+422.637277528 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944681 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944671092 +0000 UTC m=+422.637335680 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944696 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944689213 +0000 UTC m=+422.637353801 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944705323 +0000 UTC m=+422.637369911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944733 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944723814 +0000 UTC m=+422.637388402 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944750 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944744304 +0000 UTC m=+422.637408902 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944758215 +0000 UTC m=+422.637422813 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943607 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.945173 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.945145896 +0000 UTC m=+422.637813094 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.970464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.025491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.047323 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.047561 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048135 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048174 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048191 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048211 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048229 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048241 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.048143 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048473 4183 
projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:56.048243422 +0000 UTC m=+422.740908190 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048529 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048654 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:56.048625133 +0000 UTC m=+422.741289911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048709 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:56.048689165 +0000 UTC m=+422.741354073 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.160189 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.209087 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.210224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.210401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.210532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.210707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.210957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.211139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.211280 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.211473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.211613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.221377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.295030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.390246 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b" exitCode=0 Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.390423 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b"} Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.397025 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b"} Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.435909 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.436378 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.645136 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.691565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.869448 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.919496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.108730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.211247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.213345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.213879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214846 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.221470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.433890 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:50:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:50:49 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:50:49 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.433965 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.675231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.737545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.863373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.933597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 
19:50:50.117097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.210670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.213901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.215080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.351920 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.412428 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652"} Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.416657 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87"} Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.430274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.437261 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.437763 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.211468 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.211711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.211847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.211932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.211973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.212165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.212368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.212656 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.213483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.213671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.213891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214057 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214635 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215339 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216158 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216562 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.217024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.217127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.217232 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.267356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.438392 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.438476 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.011508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.092727 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.152017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.195546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.214118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.214256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.215975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.255546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.321627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.379498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.438071 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:52 crc 
kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.438229 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.445266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.466404 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9"} Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.507118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.587255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.647267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.695147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.760623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.811756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.859250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885758 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885900 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885920 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885949 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:52Z","lastTransitionTime":"2025-08-13T19:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.899433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.923967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.206752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91e
dc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.209838 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.209926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.210160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.210268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.210374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210705 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211603 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.217041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.217102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.248569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.289767 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089
fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0
f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd
1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.291489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307261 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307383 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307400 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307423 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307515 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.337623 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.338296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349062 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349112 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349212 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349235 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349255 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.368420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.383148 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089
fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0
f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd
1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391449 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391586 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391609 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391635 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391668 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.416399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.425267 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.447697 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.447935 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.447973 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.448006 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.448058 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.455358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.455563 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.455621 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.482859 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.482984 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.512518 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.525334 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.605677 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.643417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.707289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.770990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.829555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.870005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.917092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.985387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.036606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.082678 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.117446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.178301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.209548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210252 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.211035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.246101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.317140 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.451056 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.451358 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.592488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.667318 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.667770 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.668370 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.668440 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.668464 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.759257 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.912115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.013890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.053062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:48Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ce15d141220317b4e57b1599c379e880d26b45054aa1776fbad6346dd58a55d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce15d141220317b4e57b1599c379e880d26b45054aa1776fbad6346dd58a55d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0
ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.126303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.212486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.220343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214746 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.215977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.216026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.216725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.216981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.218251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.218517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.218536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.219140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.219373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.226435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.226770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.227134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.227238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.227597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.228215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.231684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.231900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.231995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.233340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.233730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.234090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.234336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.235061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.235377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.235902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.236745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.237080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.326347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.368346 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.378972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.447613 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.447956 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.461126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.684736 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.684974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.685010 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.685044 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.685211 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686283 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686525 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686549 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.687433 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.68728321 +0000 UTC m=+438.379948088 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686578 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.687735 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.687708552 +0000 UTC m=+438.380373180 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.687758 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.688567 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.688554646 +0000 UTC m=+438.381219274 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.689384 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.689355569 +0000 UTC m=+438.382020177 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.689480 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.689461712 +0000 UTC m=+438.382126400 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689672 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690132 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690697 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.690684417 +0000 UTC m=+438.383349035 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.690422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.691188 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.691306 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.691526 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690473 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690501 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690309 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.691393 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.691463 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.692606 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.692594162 +0000 UTC m=+438.385259040 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.692992 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.692982133 +0000 UTC m=+438.385646721 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.693124 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.693110486 +0000 UTC m=+438.385775084 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.693216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.693206309 +0000 UTC m=+438.385870897 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.693320 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.693310182 +0000 UTC m=+438.385974780 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.693482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.693615 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.693733 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.692076 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694048 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694260 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694307 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.69429632 +0000 UTC m=+438.386960938 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694471 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:11.694459495 +0000 UTC m=+438.387124123 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694504 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694661 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.694651941 +0000 UTC m=+438.387316559 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694740 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.69499471 +0000 UTC m=+438.387659438 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694659 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695038 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695087 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.695074473 +0000 UTC m=+438.387739331 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694083 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695126 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.695118654 +0000 UTC m=+438.387783252 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.694214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695186 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695230 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695351 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:55 crc 
kubenswrapper[4183]: I0813 19:50:55.695427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695458 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695484 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695516 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695976 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696095 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696244 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696283 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696457 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696587 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696370 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696411 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object 
"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696926 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697436 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.69742201 +0000 UTC m=+438.390086608 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697498 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697486782 +0000 UTC m=+438.390151370 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697518 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697510942 +0000 UTC m=+438.390175660 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697527913 +0000 UTC m=+438.390192591 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697551163 +0000 UTC m=+438.390215761 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697572 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697565264 +0000 UTC m=+438.390229852 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697588 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697581864 +0000 UTC m=+438.390246542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697615 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697604985 +0000 UTC m=+438.390269573 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.801620 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.802319 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.802378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.802669 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.803059 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.803083 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.80273274 +0000 UTC m=+438.495397548 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.803576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.803550453 +0000 UTC m=+438.496215061 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.803959 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.804112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.804408 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.804581 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.806009 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.806293 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.807717 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.807922 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.807895747 +0000 UTC m=+438.500560485 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.807996 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808038 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808028671 +0000 UTC m=+438.500693289 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808087 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808123 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808114633 +0000 UTC m=+438.500779241 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808166 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808193 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808185625 +0000 UTC m=+438.500850493 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808366 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808393271 +0000 UTC m=+438.501057889 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808947 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808987 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808978388 +0000 UTC m=+438.501643106 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.802598 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.811435 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.812129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.810984025 +0000 UTC m=+438.504770305 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.851263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915256 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915371 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915425 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915455 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: 
\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915543 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915572 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915604 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915666 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915700 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915735 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915788 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:55 crc 
kubenswrapper[4183]: I0813 19:50:55.915875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915969 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916017 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916084 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916114 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916218 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916243 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916360 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916390 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916415 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916449 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916473 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916592 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916624 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916653 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916680 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916784 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.917316 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.917842 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.917419657 +0000 UTC m=+438.610084385 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.917952 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918000 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.917980323 +0000 UTC m=+438.610645041 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918051 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918079 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918071946 +0000 UTC m=+438.610736554 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918120 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918154 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918144568 +0000 UTC m=+438.610809406 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918200 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918232 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.91822545 +0000 UTC m=+438.610890048 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918266 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918290 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918282462 +0000 UTC m=+438.610947060 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918348 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918374 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918367454 +0000 UTC m=+438.611032052 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918438 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918459 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918474 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918515 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918504678 +0000 UTC m=+438.611169286 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918581 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918594 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918602 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918629 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918622042 +0000 UTC m=+438.611286750 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918674 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918705 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918696024 +0000 UTC m=+438.611360722 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918755 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918785 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918778266 +0000 UTC m=+438.611443244 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918990 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919008 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919017 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919055 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:11.919045494 +0000 UTC m=+438.611710122 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919418 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919440 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919450 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919506 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919551 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919628 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919640 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919648 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919716 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919855 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919869 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919902 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.919887628 +0000 UTC m=+438.612552236 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.919906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919946 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919979 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.91997055 +0000 UTC m=+438.612635248 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920023 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920052 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920045362 +0000 UTC m=+438.612709970 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920094 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920121635 +0000 UTC m=+438.612786233 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920175 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920213 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920201277 +0000 UTC m=+438.612865885 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920266 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920278 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920291 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.920324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920327 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.9203158 +0000 UTC m=+438.612980508 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920369 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920404 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920389642 +0000 UTC m=+438.613054250 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920456 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920484 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920477535 +0000 UTC m=+438.613142243 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920526 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920561 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920553607 +0000 UTC m=+438.613218215 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.920637 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.920878 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921058 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921091 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921082302 +0000 UTC m=+438.613746910 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921133 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921155184 +0000 UTC m=+438.613823082 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921053 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921211 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921239 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921232896 +0000 UTC m=+438.613897604 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921276 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921304 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921297218 +0000 UTC m=+438.613961826 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921345 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921372 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.92136588 +0000 UTC m=+438.614030488 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921389241 +0000 UTC m=+438.614053829 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921418 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921411411 +0000 UTC m=+438.614075999 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921433 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921425802 +0000 UTC m=+438.614090460 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921441252 +0000 UTC m=+438.614105840 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921492 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921515944 +0000 UTC m=+438.614180542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921573 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921595287 +0000 UTC m=+438.614259895 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921648 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921683 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921668739 +0000 UTC m=+438.614333337 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921722 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921762 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:11.921752631 +0000 UTC m=+438.614417239 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922122 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921977838 +0000 UTC m=+438.614642706 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922324 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922356188 +0000 UTC m=+438.615020806 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922417 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922446 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922438651 +0000 UTC m=+438.615103379 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922525 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922539 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922555 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922594 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922585945 +0000 UTC m=+438.615250683 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922617 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922609526 +0000 UTC m=+438.615274124 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922653 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922736 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922672687 +0000 UTC m=+438.615337305 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923004 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923061 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923136 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923326 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923468 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923532 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923557 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923601 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923630 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923697 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:55 crc 
kubenswrapper[4183]: I0813 19:50:55.923725 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923758 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.924024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.924055 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.924098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924200 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924221 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924233 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924263 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924254833 +0000 UTC m=+438.616919451 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924374 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924389 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924432 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924423368 +0000 UTC m=+438.617088096 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924472 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924490 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924497 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924506 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924514 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc 
kubenswrapper[4183]: E0813 19:50:55.924521 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924543811 +0000 UTC m=+438.617208429 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924591 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924657 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924566032 +0000 UTC m=+438.617230620 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924743 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924734136 +0000 UTC m=+438.617398854 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924760 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924752707 +0000 UTC m=+438.617417415 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924778 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924871 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.92485985 +0000 UTC m=+438.617524458 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924897 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924932 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924953 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924962 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924976 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925029 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925098 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925111 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925118 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925145 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925163 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925171 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925191 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925207 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925232 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925266 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924935 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924927252 +0000 UTC m=+438.617591980 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937200363 +0000 UTC m=+438.629864961 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937243 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937234514 +0000 UTC m=+438.629899102 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937259 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937253364 +0000 UTC m=+438.629918072 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937274 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937267495 +0000 UTC m=+438.629932093 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937299 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937288645 +0000 UTC m=+438.629953233 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:11.937312466 +0000 UTC m=+438.629977054 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937344 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937336327 +0000 UTC m=+438.630000925 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937358707 +0000 UTC m=+438.630023305 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925301 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937573 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925337 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925384 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925434 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925468 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924743 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object 
"openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925505 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925625 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925665 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.928645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937994 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937974755 +0000 UTC m=+438.630639383 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938034 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938027156 +0000 UTC m=+438.630691774 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938059 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938074 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938410227 +0000 UTC m=+438.631074965 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938568 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938559662 +0000 UTC m=+438.631224370 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938582 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938575782 +0000 UTC m=+438.631240490 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938597 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938590642 +0000 UTC m=+438.631255360 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938446 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938612 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938637 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938630694 +0000 UTC m=+438.631295302 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938463 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938654 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938688 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938679735 +0000 UTC m=+438.631344463 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938714 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938735957 +0000 UTC m=+438.631400585 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.025592 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.025701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026004 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026066 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod 
\"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026156 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026183 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026242 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026292 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026321 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026379 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 
19:50:56.026429 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026671 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027027 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027185 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027410 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027569 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027669 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027904 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027971 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028015 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028079 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod 
\"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028156 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028265 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.028960 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029188 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:12.029166811 +0000 UTC m=+438.721831769 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029301 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029321 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029334 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029371 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029360627 +0000 UTC m=+438.722025315 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029494 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029538 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029526441 +0000 UTC m=+438.722191140 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029607 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029621 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029631 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029662255 +0000 UTC m=+438.722326953 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029736 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029780 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029768928 +0000 UTC m=+438.722433606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029966 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030014 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030001685 +0000 UTC m=+438.722666503 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030070 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030109 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030092478 +0000 UTC m=+438.722757176 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030173 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030188 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030200 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030236 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030224091 +0000 UTC m=+438.722888779 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030290 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030327 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030317534 +0000 UTC m=+438.722982232 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030379 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030419 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030409057 +0000 UTC m=+438.723073755 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030467 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030515 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030503499 +0000 UTC m=+438.723168187 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030565 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030608 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030597342 +0000 UTC m=+438.723262030 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030673 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030691 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030702 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030744 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030725956 +0000 UTC m=+438.723390654 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031240 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052060916 +0000 UTC m=+438.744725524 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031395 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052123 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052114667 +0000 UTC m=+438.744779285 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031469 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052146 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052160 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052194 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052186179 +0000 UTC m=+438.744850787 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031518 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052217 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052225 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052242381 +0000 UTC m=+438.744906989 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031562 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052270 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052277 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052299 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052293622 +0000 UTC m=+438.744958230 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038112 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052325 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052353 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052346404 +0000 UTC m=+438.745011012 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038147 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052392 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052384525 +0000 UTC m=+438.745049133 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038188 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052415 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052423 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052752 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052446107 +0000 UTC m=+438.745110715 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038222 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053401 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053413 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053450 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053438615 +0000 UTC m=+438.746103293 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038271 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053477 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053486 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053527 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053516757 +0000 UTC m=+438.746181375 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038317 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053551 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053562 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053596 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053585829 +0000 UTC m=+438.746250447 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038353 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.0536242 +0000 UTC m=+438.746288818 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038405 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053669 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:12.053660101 +0000 UTC m=+438.746324789 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038456 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053702 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053711 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053738 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053729263 +0000 UTC m=+438.746393961 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038493 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053783 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053774815 +0000 UTC m=+438.746439503 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038522 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054154 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:12.054143315 +0000 UTC m=+438.746808003 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038554 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054196 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054186946 +0000 UTC m=+438.746851634 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039152 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054221 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054233 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054256728 +0000 UTC m=+438.746921436 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039202 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054292 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054302 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054334 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.05432556 +0000 UTC m=+438.746990238 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039241 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054375 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054366301 +0000 UTC m=+438.747030989 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039293 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054394 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054423 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054417813 +0000 UTC m=+438.747082431 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039337 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054442 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054449 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054478 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054470504 +0000 UTC m=+438.747135142 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039593 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054508025 +0000 UTC m=+438.747172713 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.063362 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.063190003 +0000 UTC m=+438.755854841 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.069783 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.131035 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.133448 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.133513 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.133528 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.134696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.135692 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.137675 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.137709 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.135741 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.135714826 +0000 UTC m=+438.828379424 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.138024 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.137987071 +0000 UTC m=+438.830652129 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.141418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.142038 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.142077 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.142251 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.142238353 +0000 UTC m=+438.834903071 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.184985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209772 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.209993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.210593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.211124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.211261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.247521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.293759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.333889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.391443 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.433995 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:50:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:50:56 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:50:56 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.434142 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.434338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.557632 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9"}
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.656619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.900761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.113489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.218157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.218341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.218918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.219295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.219538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.219632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.219689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.219786 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220778 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.225035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.227431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.227920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.227983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.230055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.253521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.330164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.401170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.446947 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.447375 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.468128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.495680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 
19:50:57.528711 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.556147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.595352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.677505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.724467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.802921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.845900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.891453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.932938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.009762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.134739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.208763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.208894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.209929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209027 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.286195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.322688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.366602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.400446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.436993 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.437129 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.438613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.475304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.508161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.537058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.573022 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87" exitCode=0 Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.573114 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87"} Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.574289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.600757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.628072 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.649170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.686045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.715759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.748028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.770996 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.797042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.827005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.871950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.905761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.947086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.974070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.020358 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.075759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.129960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.169723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209023 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.209025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.209233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.210308 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.214018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.214178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.214580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.214738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.214984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.215330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215384 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.215544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.215712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.218250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.218479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.218842 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.218910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.218985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.218979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.219313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.219361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219398 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.219754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.220085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.220336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.220945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221777 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.222194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.222285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.222373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.224393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.224581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.225192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.225307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.225576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.237571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.291390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.322467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.350143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.406518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.435894 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.436017 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.445968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.473691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.501036 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.531341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.613399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.652141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.686485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.728089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.758686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.806360 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.848123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.894343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.940165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.991706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.075096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.136959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.192562 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.208975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.209059 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.209476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.209662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.210013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.210108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.233030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.264467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.293098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.322323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.361048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.378410 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.391430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.425056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.433337 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.433920 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.461912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from 
succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.525496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.552112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.579068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.606494 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf" exitCode=0 Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.606575 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf"} Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.618186 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6"} Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.622722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.658214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.694858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.734452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.807626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.833256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.858683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.901316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.932641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.961318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.999348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.036401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.063490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.114161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.147094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.179297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.210075 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.210305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.210461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.211233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.211513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.211704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.212231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212459 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212778 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212923 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.213546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.213945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.215125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.215512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.215660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.216049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.216562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.217613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.220108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.220255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.250750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.281051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.309018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.343007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.370518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.398015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.457964 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.458680 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.468953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.532073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.613927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.659184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.715514 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.766423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.805048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.837733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.880652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.923471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0
ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.001616 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.034109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.065888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.108057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.148203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.175307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.199300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.209913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.209963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.209926 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.232958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.262201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.284102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.312208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.346709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.374514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.425160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.432559 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:02 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:02 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.432655 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.469506 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.498331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.535115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.562049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.613405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerI
D\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.652239 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.658277 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6" exitCode=0 Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.658379 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6"} Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.074150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.128730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.191086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 
19:51:03.208594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.208863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209391 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209452 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209951 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210270 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210855 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210874 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.212065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.212105 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.219195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.258186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.320661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.353456 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.385083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.415414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.437119 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.437734 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.472669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.516271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.561183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.599609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.648547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.701203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.738746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.739324 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561"} Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.740385 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.740622 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.751859 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.864463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865516 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865560 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865572 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865613 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865695 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:03Z","lastTransitionTime":"2025-08-13T19:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.891201 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910497 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910561 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910603 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910637 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:03Z","lastTransitionTime":"2025-08-13T19:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.928645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.969222 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":
[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08
dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.971533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980643 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980716 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980741 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980764 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:03Z","lastTransitionTime":"2025-08-13T19:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.000279 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014669 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014713 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014730 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014759 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014887 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:04Z","lastTransitionTime":"2025-08-13T19:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.036882 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.056893 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.058050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.060678 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.062055 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.062189 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.062748 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:04Z","lastTransitionTime":"2025-08-13T19:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.091697 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.091757 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.099907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.147366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.176675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.193186 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209157 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.210215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.217704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.269547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.280166 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.356140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.397459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.433765 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.434007 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.468167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.524495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.616197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.650058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.671362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.755020 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c"} Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.921579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:04.999980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.038347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.063112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.093964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.176644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.211310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.211463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.211895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.214629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.214691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214775 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.217144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.219079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.219119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.229455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.350753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.382071 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.420324 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.434598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:05 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:05 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.434696 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.482096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.523588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.590380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.617719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.722247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.778175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.927874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.970170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.065532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.125139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209067 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.210267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.432341 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.432441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.209678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210056 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.211145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.211556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.211643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.212436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.212635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213132 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214391 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.215162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.215388 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.215572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.215764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.217396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.217551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.218303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.218644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.432438 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:07 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:07 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.432909 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.495766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.548358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.618118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.646137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.669107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.692050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.738361 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.804098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.833114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.862096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.898239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc 
kubenswrapper[4183]: I0813 19:51:07.939601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.972192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.027572 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.061320 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.097478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.127744 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.148870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.179521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.205912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.208590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.208723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209623 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.210068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.234059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.264989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.289536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.434084 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:08 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:08 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.434184 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.916143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.949487 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.984255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.013006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.051369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.076465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.108584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.144491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.174097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.207762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.208675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.208911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209123 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.212121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.213192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.214023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.214114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.238434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.274247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.298652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.432176 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:09 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:09 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.432303 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.825172 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.850296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.889281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.918575 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.948323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.972630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.990956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.011964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.059552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.112084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.196681 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.210437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.210622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210677 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.210984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.211138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.211296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.211405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.212099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.212648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.243285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.388547 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.444178 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.444296 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.530199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.575148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.630057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.681587 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.721430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.774552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.814963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.846009 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c" exitCode=0 Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.846079 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c"} Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.850105 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.084491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.116248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.137138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.180652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.201934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.209993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.211416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212032 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.212368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.211586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.214661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.219170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.223245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.223982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.238187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.252902 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output="" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.261025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.281422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.313526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.340654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.369495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.389295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.407039 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.432526 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.432621 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.447233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.467723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.492706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.517337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.545198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.567309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.586343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.606654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.631234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.649246 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.666476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 
13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.679672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.701056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.721531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.742491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.759196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.772995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773082 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773139 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773192 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773230 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.773207915 +0000 UTC m=+470.465872723 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773255 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.773240246 +0000 UTC m=+470.465904964 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773293 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773347 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773379 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773409 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773437 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773472 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773544 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773589 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773720 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773865 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773915 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773989 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774120 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774147 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774179 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774641 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774705 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774866 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775040 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775115 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert 
podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775096969 +0000 UTC m=+470.467761707 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775172 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775243 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775227413 +0000 UTC m=+470.467892221 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775298 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775329 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775321926 +0000 UTC m=+470.467986554 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775384 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775437 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775429169 +0000 UTC m=+470.468093897 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775477 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775502 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775496171 +0000 UTC m=+470.468160779 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775538 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775561 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775554363 +0000 UTC m=+470.468219071 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775620 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775633 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775645 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775664646 +0000 UTC m=+470.468329374 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775712 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775737 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775731128 +0000 UTC m=+470.468395856 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775858 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775897 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775887872 +0000 UTC m=+470.468552600 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775941 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775966 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775959624 +0000 UTC m=+470.468624552 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776001 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776017276 +0000 UTC m=+470.468682024 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776055 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776076 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776070067 +0000 UTC m=+470.468734795 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776111 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776137 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776130729 +0000 UTC m=+470.468795457 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776173 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776195 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776188091 +0000 UTC m=+470.468852789 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776247 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776260 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776278083 +0000 UTC m=+470.468942791 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776322 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776345 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776338965 +0000 UTC m=+470.469003663 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776379 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776394707 +0000 UTC m=+470.469059315 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776434 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776460 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776452058 +0000 UTC m=+470.469116776 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776489 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776514 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.77650713 +0000 UTC m=+470.469171738 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776547 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776571 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776565181 +0000 UTC m=+470.469229799 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776605 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776636 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776627503 +0000 UTC m=+470.469292141 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776920 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.777015 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.777002534 +0000 UTC m=+470.469667382 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776700 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.777065 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.777057855 +0000 UTC m=+470.469722583 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.782555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.803531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.821089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.838261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.855374 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be" exitCode=0 Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.855422 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be"} Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.871927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876014 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876125 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876314 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876341 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876373 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.877594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.877687 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878534 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878613548 +0000 UTC m=+470.571278276 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878685 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878706221 +0000 UTC m=+470.571370829 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878750 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878855 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878768702 +0000 UTC m=+470.571433430 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878901 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878921087 +0000 UTC m=+470.571585705 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878963 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878985 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.878979138 +0000 UTC m=+470.571643756 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879015 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.8790307 +0000 UTC m=+470.571695318 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879070 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879185 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.879177064 +0000 UTC m=+470.571841682 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879233 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879311 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.879253696 +0000 UTC m=+470.571918324 (durationBeforeRetry 32s). 
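
Each failed SetUp is parked with a durationBeforeRetry, and the delays grow exponentially: every entry in this window lands on 32s, and the absolute retry time (19:51:43, m=+470.57) is exactly 32s after the failure at 19:51:11. A minimal sketch of that schedule, assuming the commonly cited constants (500ms initial delay, doubling per failure, capped a little over two minutes); the exact values live in kubelet's exponential-backoff utility:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Prints the assumed retry schedule. The 32s seen throughout this
    // log is one step of this doubling sequence; the constants here
    // (500ms initial, x2 per failure, ~2m2s cap) are assumptions.
    func main() {
    	wait := 500 * time.Millisecond
    	maxWait := 2*time.Minute + 2*time.Second
    	for attempt := 1; wait < maxWait; attempt++ {
    		fmt.Printf("failure %d -> durationBeforeRetry %v\n", attempt, wait)
    		wait *= 2
    	}
    }
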
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879362 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.879409461 +0000 UTC m=+470.572074189 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.904850 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.927889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.946533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.968362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979616 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979870 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979923 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979964 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: 
\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980027 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980051 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980079 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980130 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980157 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980181 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980218 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 
19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980320 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980369 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980467 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980495 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980518 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980605 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980631 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980656 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980709 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980907 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980965 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980999 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981023 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981086 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981110 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: 
\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981181 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981274 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981315 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981369 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981347424 +0000 UTC m=+470.674012162 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981383 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981394 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981383195 +0000 UTC m=+470.674047793 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981403316 +0000 UTC m=+470.674068034 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981463 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981502 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981495018 +0000 UTC m=+470.674159756 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981544 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981555 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981570 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.98156337 +0000 UTC m=+470.674228098 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981592 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981583301 +0000 UTC m=+470.674248019 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981603 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981628 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.981621632 +0000 UTC m=+470.674286350 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981680 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981710 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981702304 +0000 UTC m=+470.674366932 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981726 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981747 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981863 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981766226 +0000 UTC m=+470.674430854 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981889 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981880149 +0000 UTC m=+470.674544887 (durationBeforeRetry 32s). 
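
The "object ... not registered" errors themselves do not necessarily mean the Secret or ConfigMap is missing from the API server. Kubelet's secret and configmap managers resolve these objects through a per-node cache that only serves namespace/name pairs referenced by pods currently registered with the manager; until a pod is (re)registered after a restart, every lookup for its objects is refused as "not registered". One way to confirm the objects still exist cluster-side is to query the API server directly, as in this hedged sketch (the kubeconfig path is an assumption, the namespace and name are taken from one of the errors above, and the client-go module is required):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// kubeconfig path is an assumption; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Namespace and name taken from one of the failures above.
    	s, err := cs.CoreV1().Secrets("openshift-authentication").
    		Get(context.TODO(), "v4-0-config-system-session", metav1.GetOptions{})
    	if err != nil {
    		fmt.Println("API lookup failed:", err)
    		return
    	}
    	fmt.Println("secret exists, resourceVersion:", s.ResourceVersion)
    }
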
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981912 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981936 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981974 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981951652 +0000 UTC m=+470.674616340 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981986 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981997 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981989143 +0000 UTC m=+470.674653741 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982011 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982004583 +0000 UTC m=+470.674669171 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982051 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982063 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982075 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982069145 +0000 UTC m=+470.674733873 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982077 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982097 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982110 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982131 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982124796 +0000 UTC m=+470.674789404 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982149 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982141687 +0000 UTC m=+470.674806275 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982165 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982196 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982186978 +0000 UTC m=+470.674851596 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982220 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982237 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982242 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982253 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982253 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982279 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982272091 +0000 UTC m=+470.674936889 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982298 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982290091 +0000 UTC m=+470.674954809 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982309542 +0000 UTC m=+470.674974130 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982334 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982345 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982359 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982368 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982376 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982365853 +0000 UTC m=+470.675030521 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982385 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982394 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982408 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982421 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982412705 +0000 UTC m=+470.675077323 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982425 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982439 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982431585 +0000 UTC m=+470.675096173 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982456 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982448116 +0000 UTC m=+470.675112904 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982458 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982488 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982493 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982487107 +0000 UTC m=+470.675151725 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982502 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982511 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982520 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982535 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982538 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982531998 +0000 UTC m=+470.675196726 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982490 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982582 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982617 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982632 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982640 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982659 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982685 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982723 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982732 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982866 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982881 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982894 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982881 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not 
registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982934 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982936 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983119 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983123 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982371 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982944 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982224 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982984 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982966241 +0000 UTC m=+470.675630979 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.98331422 +0000 UTC m=+470.675978929 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983343 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983335961 +0000 UTC m=+470.676000549 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983052 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983362 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983355932 +0000 UTC m=+470.676020530 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983384 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983376992 +0000 UTC m=+470.676041590 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983406 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983391963 +0000 UTC m=+470.676056671 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983425 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983417733 +0000 UTC m=+470.676082411 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983434204 +0000 UTC m=+470.676135313 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983503 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983494216 +0000 UTC m=+470.676158894 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983510786 +0000 UTC m=+470.676175474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983525877 +0000 UTC m=+470.676190475 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.983574 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984142 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984428 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984496 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984689 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984766 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984917 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984953 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984979 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " 
pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985047 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985073 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985516 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985530 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered 
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985539 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985572 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985562785 +0000 UTC m=+470.678227523 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985649 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985685668 +0000 UTC m=+470.678350366 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985872 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985705159 +0000 UTC m=+470.678369837 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985898 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985889384 +0000 UTC m=+470.678554092 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985912 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985906435 +0000 UTC m=+470.678571143 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985928 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985920805 +0000 UTC m=+470.678585513 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985942 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985935475 +0000 UTC m=+470.678600073 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985992 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986006 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986015 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986042 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986035038 +0000 UTC m=+470.678699776 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986085 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986097 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986106 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986130 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986123101 +0000 UTC m=+470.678787829 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986166 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986190 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986183532 +0000 UTC m=+470.678848140 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986232 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986242 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986251 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986275 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986268455 +0000 UTC m=+470.678933443 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986309 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.986324746 +0000 UTC m=+470.678989465 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986369 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986379 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986388 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986412 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986405849 +0000 UTC m=+470.679070587 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986447 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.98646266 +0000 UTC m=+470.679127368 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986512 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986521 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986545 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986539113 +0000 UTC m=+470.679203731 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986581 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986598654 +0000 UTC m=+470.679263272 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987126 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987170 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987182 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987214 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987205112 +0000 UTC m=+470.679869720 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987251 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987276 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987269594 +0000 UTC m=+470.679934322 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987307 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.987322555 +0000 UTC m=+470.679987173 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987383 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987395 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987422 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987445 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987465 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987477 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987485 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987500 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987524 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987530 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987535 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" 
not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987543 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987555 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987620 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987645 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987669 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987419648 +0000 UTC m=+470.680084376 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987699 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987689366 +0000 UTC m=+470.680353954 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987719 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987709946 +0000 UTC m=+470.680374534 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987735 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987727907 +0000 UTC m=+470.680392495 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987747 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987741927 +0000 UTC m=+470.680406515 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987760 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987755017 +0000 UTC m=+470.680419615 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987895 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987769398 +0000 UTC m=+470.680544489 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987917 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987910832 +0000 UTC m=+470.680575420 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987935 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987929572 +0000 UTC m=+470.680594160 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.990337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.009479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.027937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.052380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.071366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087352 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087450 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087525 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087549 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087577 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087611 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087634 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088178 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088224 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:12 crc 
kubenswrapper[4183]: I0813 19:51:12.088342 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088503 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088526 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088550 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088603 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088587829 +0000 UTC m=+470.781252447 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088660 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088671 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088678 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088703 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088696492 +0000 UTC m=+470.781361110 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088743 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088768 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088760304 +0000 UTC m=+470.781424912 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088910 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088940 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088932999 +0000 UTC m=+470.781597607 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088972 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088999 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088990601 +0000 UTC m=+470.781655399 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089045 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089076 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:44.089067713 +0000 UTC m=+470.781732651 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089123 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089153 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089144575 +0000 UTC m=+470.781809373 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089212 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089238618 +0000 UTC m=+470.781903326 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089257 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089323 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089356 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:44.089341771 +0000 UTC m=+470.782006499 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089388 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089406333 +0000 UTC m=+470.782070951 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089456 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089471 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089482 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089506105 +0000 UTC m=+470.782170774 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089546 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089557 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089566 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089592 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089584738 +0000 UTC m=+470.782249366 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089639 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094238 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094322 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094349 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object 
"openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094424 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.094431806 +0000 UTC m=+470.787096604 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095057 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.095038014 +0000 UTC m=+470.787702712 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095237 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095257 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095269 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095319 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.095306281 +0000 UTC m=+470.787970969 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095621 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096503 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096536 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096554 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096700 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096881 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.096719552 +0000 UTC m=+470.789384230 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.097979 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098004 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098050 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.098317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098408 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.097172025 +0000 UTC m=+470.789836673 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.098479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098952 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098980 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099099 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.098925045 +0000 UTC m=+470.791589653 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099133 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:44.09912044 +0000 UTC m=+470.791785028 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.099143831 +0000 UTC m=+470.791808469 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.099711007 +0000 UTC m=+470.792375715 (durationBeforeRetry 32s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.100349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.100468 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.101294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.10120308 +0000 UTC m=+470.793867898 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.101298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.105427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.106115 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.106400 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.106885 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.106924 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.106954 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.106967 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107034 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.101617 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107087 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107079328 +0000 UTC m=+470.799743946 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107306 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107323 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107332 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107370 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107360976 +0000 UTC m=+470.800025594 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
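The recurring "No retries permitted until ... (durationBeforeRetry 32s)" lines come from the volume manager's per-operation exponential backoff: each consecutive failure roughly doubles the wait, and starting from half a second the seventh failure lands at 32s, which is where this log has settled. A self-contained sketch of that shape; the initial duration and the cap below are illustrative assumptions, not kubelet's exact constants:

```go
// backoff_shape.go — illustrative reconstruction of the doubling backoff
// visible in the records above (constants are assumptions).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // assumed starting backoff
	maxDelay := 2 * time.Minute     // assumed cap
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("failure %d -> durationBeforeRetry %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```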
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107470 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107502 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107516 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107546 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107566 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107578 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107556381 +0000 UTC m=+470.800221119 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107933 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107920772 +0000 UTC m=+470.800585370 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109438 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109595 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109643 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.109632791 +0000 UTC m=+470.802297409 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109751 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109766 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109860 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109881 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109907 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.109895198 +0000 UTC m=+470.802559826 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109945 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110001 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.110014 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110033 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.110024812 +0000 UTC m=+470.802689430 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110110 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.110119 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.110188447 +0000 UTC m=+470.802853275 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110232 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.110223158 +0000 UTC m=+470.802887826 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.110482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111062 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.111252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111443 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.111331929 +0000 UTC m=+470.803996717 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111509 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.11170629 +0000 UTC m=+470.804370998 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111887 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111922 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111936 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111993 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.111979728 +0000 UTC m=+470.804644426 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.112028 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.112078 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.1120664 +0000 UTC m=+470.804731068 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.115026 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115381 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115405 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115426 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115488 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.115469188 +0000 UTC m=+470.808134116 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.125390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z"
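Every status patch in these records fails for the same reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2024-12-26, well before the node's current time of 2025-08-13. A probe sketch that prints the validity window of whatever certificate that endpoint currently serves; InsecureSkipVerify is deliberate here, since the point is to inspect an already-invalid certificate rather than trust it:

```go
// webhook_cert_probe.go — dial the webhook endpoint named in the log and
// report the served certificate's NotBefore/NotAfter window.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%v notBefore=%v notAfter=%v expired=%v\n",
			cert.Subject, cert.NotBefore, cert.NotAfter, now.After(cert.NotAfter))
	}
}
```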
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.160298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.177964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.207374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.209503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.209729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.209917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
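The "network is not ready" records here all trace back to a single condition: no CNI configuration file exists yet under /etc/kubernetes/cni/net.d/, because the ovnkube-controller container (still unready in the ovnkube-node-44qcg status above) has not written one. A rough approximation of that readiness check; the directory is the one named in the log, while the extension list is an assumption for illustration:

```go
// cni_ready.go — look for CNI config files the way the runtime readiness
// check does, approximately (extension list is an assumption).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI config found; the network plugin has not started yet")
	}
}
```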
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.211039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.211142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.218271 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.219298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.219700 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.219926 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220010 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220065 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220121 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.220103548 +0000 UTC m=+470.912768326 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.219955 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220153 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220166 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220201 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.220189081 +0000 UTC m=+470.912853879 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220020 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.219741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.222958 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.220358935 +0000 UTC m=+470.913023683 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.233442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.251069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.267173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.285887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.303890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.318966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.336464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.351733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.367608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.392973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.412932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.426069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.432979 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.433088 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.443656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.462753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 
19:51:12.490319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.503713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.522402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.545945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.561971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.580005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.603304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.624736 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.643679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.662195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.678466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.698843 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.715483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.731612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.747235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.772334 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.797919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.814680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.832541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.851410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.865155 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f"} Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.875411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.895211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.912060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.938242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.954307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.971114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.989738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.004040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.023087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.045277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.057881 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.080718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.096717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.111548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.129164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.141299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.162088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.180579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server 
(\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.195850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208185 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209846 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210848 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.211088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.211114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.211236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.212029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.212744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.216768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.241924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.260389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.275694 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.302278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.337870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.377087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.417746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.431914 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.431982 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.463690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.501359 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.539684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.578988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.616479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.657338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.700046 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.738332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.778629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.830527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.881299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.918914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.945133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.977665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.022864 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.057453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.106942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.139392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.176210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209134 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.210047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.223399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.258626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.302991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.314875 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315148 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315248 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 
19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315374 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315496 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.335936 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.339166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.341630 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.341916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.342086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.342240 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.342413 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.360299 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365857 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365874 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365893 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365920 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.386918 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391526 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391548 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391559 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391601 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.409225 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413737 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413917 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413941 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413976 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.414015 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.421178 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.432215 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.432302 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.432905 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-ma
nager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e99
6bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.432958 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.457277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.497870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.541958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.584955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.628287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.667723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.701660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.738179 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.813600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.834432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.860175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.901732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.946650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.977707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.018738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.056296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.096667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.140752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.188368 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.208970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209179 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.209502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.209632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.209744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210192 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.212669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.212947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.213080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.213485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.224061 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.285058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.389688 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.432067 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.432156 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.510596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.669903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.828137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.871104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.984308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.033579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.064140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.273030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.338308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.363562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.385404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.404099 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.421513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.432631 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.432723 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.440407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.474394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.494576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.515876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.534547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.555438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.573903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.617191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.637756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.655413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.671890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.703082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.727226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.745717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.760621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.775566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc 
kubenswrapper[4183]: I0813 19:51:16.810502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.833505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.850015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.867168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.891755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.904583 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453"} Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.908658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.924159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.939621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.956049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.971838 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.991393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.005948 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.021381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.040133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.056056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.072695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.099019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.139018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.176695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208642 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.208992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209949 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210260 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211923 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.212362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.212957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.213292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.220088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.258336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.300519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.339115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.380173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.421761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.431862 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.431965 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.456697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.498064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.548357 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.577003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.622515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.660262 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.698527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.739460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.776891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.817708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.859649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.896351 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.940258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.977728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.020480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.058168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.103326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.143005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.176901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208320 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.208482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.208751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.208989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.220355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.260275 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.298617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.339464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.381536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.418447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.432713 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.432906 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.460858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.498501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.542007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.580348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.617100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.657993 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.698140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.738018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.786119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.818503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.856961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.899682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 
19:51:18.941592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.975461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.019726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.060018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.098167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch 
stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.138641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.177624 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208291 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.208497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.208698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210169 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210465 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210954 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.209764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.212704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.213072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.214075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.214192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.214975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.215357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.215365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.220908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.259021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.300585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.338432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.380912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.419832 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.431902 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.432320 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.458858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.500419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.537872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.577374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.623367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.766701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.790947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.820256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.850595 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.874545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.892241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.909876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.938441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.977522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.017315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.057197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.098237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.139069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.184590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.208428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.208644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.210192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.218435 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.259151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.296561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.340897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.379992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.392194 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.423043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b
6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.432402 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.432498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.458549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.500177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.541717 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.579589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.623195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.658644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.703630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.739581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.777685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.817014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.864239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-
13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e
97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.900465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.938106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.979222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.024705 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208270 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.208501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.208686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.208965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210008 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210401 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211215 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.212407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.212405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.212591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.212704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.212866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.214027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.214129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.215254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.215363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.218528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.218959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.219304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.219398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.219540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.432517 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.433903 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.210025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.210157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.432099 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.432193 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.208882 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209702 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210026 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210852 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211023 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211384 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.212247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.212417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.213441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.213534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.431706 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.431872 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208756 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208988 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.209064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.211045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.211196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.432906 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.433026 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639317 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639385 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639401 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639421 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639447 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.653677 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.658767 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659402 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.674016 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the 19:51:24.653677 attempt above ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has
expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679322 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679390 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679493 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679525 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679655 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.696555 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the 19:51:24.653677 attempt above ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has
expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701721 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701824 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701844 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701862 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.702191 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.716616 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the 19:51:24.653677 attempt above; log capture ends mid-entry ...]
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721751 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721765 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721853 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721878 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.738284 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.738362 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
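[Editor's note, not part of the log: every node and pod status patch in this excerpt fails the same way. The network-node-identity webhook's serving certificate expired 2024-12-26T00:46:02Z, while the node clock reads 2025-08-13, so TLS verification of https://127.0.0.1:9743 fails and the API server rejects the patch. A minimal Go sketch, under the assumption that the endpoint from the error text is reachable, that fetches the certificate and prints the validity window the error is complaining about:]

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Verification is exactly what fails here, so skip it in order to
	// retrieve the certificate and inspect its validity window directly.
	// The address is taken from the webhook error in the log above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}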
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.209911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210433 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210953 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211191 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.212010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.212331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.212542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.231529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.231529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.249615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.265732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.279593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.295707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.313038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.328375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.345296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.367307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.383495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.393295 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.400683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.416499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.432475 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.432588 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.433335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.457061 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.474546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.490258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.509655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.527202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.545919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.565131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.580255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.613675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.629380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.649561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.666564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.685427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.707308 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.724742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.746955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.766518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.785331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.804706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.828198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.844508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.862140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.880048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.895446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.920745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.948183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.982439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.005452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.022371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.040235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.061654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.084113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.099721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.116106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.131947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.146928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.170229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.183011 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.196946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208040 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.209013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.209112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.209215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.215950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e40
84a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.232440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.249114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.272082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.288483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.305123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.320452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.337512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 
13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.352419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.368181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.386370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.402988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.421871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.432242 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.432343 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.440943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.456318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208403 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.208421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.208564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208753 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.208920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.433882 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.434002 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.208441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.208712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.208849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.208877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.433038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.433173 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209598 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210362 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210691 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211227 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211873 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.212083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.212238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.212959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.213383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.214056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.214112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.433432 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.433544 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.208156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.208441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.208659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.208879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.209053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.209207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.209363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.209462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.209604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.210018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.210073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.395481 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.433977 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.434108 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209299 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.212161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.212282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.212405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.212638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.212946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.213727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.213927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.214056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214216 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.214361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.214622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215509 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215874 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.216210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.216295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217835 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217842 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.218947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.219121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.219208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.219595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.219844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.221030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.221281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.221190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.223027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.223166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.223293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.433089 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.433191 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208385 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.210071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.432598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.432690 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208967 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209885 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211287 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.211352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.211542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.211690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.213666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.216087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.216168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.216359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.433117 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.433221 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208543 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.433364 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.433469 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.869755 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870279 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870328 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870375 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870426 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.893462 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899691 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899726 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899738 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899756 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899874 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.914523 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919409 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919485 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919505 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919530 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919560 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.935607 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941412 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941546 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941570 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941596 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941625 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.956460 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962061 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962156 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962179 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962222 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962253 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.977525 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.977593 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209170 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.211929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.212018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212178 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.213083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.213197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.214420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.214666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.214902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.214981 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.215211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.215298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.215410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.216244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.216373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.216549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.218360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.218536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218649 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.233218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.248274 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.262142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.282086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.300956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.326733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.344253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.361191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.376080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.390425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.398056 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.413960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.430656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.432073 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.432149 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.448265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.464224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.484653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.509143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.523885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.539186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.553393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.574891 4183 status_manager.go:877] "Failed to update status 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 
19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.598559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.615859 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.633722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.653005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.669221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.684397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.700524 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.716653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.735922 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.752281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.772240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.795576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.811142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.826870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.846876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.864673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.880239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.895552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.910379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.926482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.941238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.955151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.971067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.000906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.017406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.035327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.050655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.065579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.101752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.172718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.187702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.208749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.209597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.210357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.212151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.227295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.242123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.256967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.267353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.282965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.298417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.317515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.334164 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.351543 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.368673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.385298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.399928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.417266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.431895 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:36 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:36 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.432192 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.432671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z"
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.448930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209379 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209668 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210383 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.211018 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.211131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.211949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.212078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.212132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.213214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.213855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.214038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.214174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.437644 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:37 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:37 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.437841 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.208923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.431243 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:38 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:38 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.431333 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.208950 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209536 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210568 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211160 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211911 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.212250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.212686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.213252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.213714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.214298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.214402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.433011 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.433108 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.208919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.208918 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.267242 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output="" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.400725 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.432900 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.433040 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.209917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210248 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211178 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.213161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.213769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214772 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215564 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.216040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.436074 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.436377 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208692 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.433429 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.433547 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.208512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.208746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.208903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209339 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210401 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211033 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211917 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212555 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.213129 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.213357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.214050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.214138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.214894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.215137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.432240 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.432343 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.798252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.798372 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.798525 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.798622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.798600749 +0000 UTC m=+534.491265497 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.798951 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799012 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799091 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799126 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799180 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799249 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799301 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799388 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799727 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799509 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799549 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799567 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799602 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799598 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799597 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799630 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799653 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799659 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 
19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799674 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800140 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800155 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799905 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.799887525 +0000 UTC m=+534.492552243 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800202 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800189444 +0000 UTC m=+534.492854132 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800248 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800213594 +0000 UTC m=+534.492878263 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800268 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800258746 +0000 UTC m=+534.492923464 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800277976 +0000 UTC m=+534.492942674 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800304 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800295277 +0000 UTC m=+534.492959965 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800319788 +0000 UTC m=+534.492984486 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800339328 +0000 UTC m=+534.493004036 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800357329 +0000 UTC m=+534.493021997 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800383 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800374099 +0000 UTC m=+534.493038737 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800400 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.80039226 +0000 UTC m=+534.493056958 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800534 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800601 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800669 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800733 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800768 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800910 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800947 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801147 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801169 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801195152 +0000 UTC m=+534.493859860 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801243 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801288 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801277095 +0000 UTC m=+534.493941823 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801318 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801356 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801345647 +0000 UTC m=+534.494010345 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801388 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801457 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801479681 +0000 UTC m=+534.494144389 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.801556 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801571 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.801593 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801616 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801603534 +0000 UTC m=+534.494268252 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801647 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801687 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801689 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801676936 +0000 UTC m=+534.494341734 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801729 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801736 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801725298 +0000 UTC m=+534.494389956 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801766 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801755578 +0000 UTC m=+534.494420346 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801876 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801860561 +0000 UTC m=+534.494525259 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801900 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801931 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801945 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801933534 +0000 UTC m=+534.494598222 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801972 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801961254 +0000 UTC m=+534.494626082 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.801649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.802018 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.802054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802387 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.802416117 +0000 UTC m=+534.495080945 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802475 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. 
No retries permitted until 2025-08-13 19:52:47.80250155 +0000 UTC m=+534.495166278 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.904579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.904750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.904892 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905220 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.905150844 +0000 UTC m=+534.597815532 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905557 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905608 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905890 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905925 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905944 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905932 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.905921426 +0000 UTC m=+534.598586074 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905998 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906067 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.906063 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906102 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906090431 +0000 UTC m=+534.598755109 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.906137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906144 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906135682 +0000 UTC m=+534.598800340 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906152 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906161 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906155433 +0000 UTC m=+534.598820021 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906195 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906182614 +0000 UTC m=+534.598847312 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906203 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906246 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906234035 +0000 UTC m=+534.598898723 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906478 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906518 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. 
No retries permitted until 2025-08-13 19:52:47.906507363 +0000 UTC m=+534.599171991 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906767 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906996 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906983467 +0000 UTC m=+534.599648105 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.008591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009063 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009217 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009334 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.008717 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009528 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009230 4183 
secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009530 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.009505138 +0000 UTC m=+534.702169846 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009285 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.00958855 +0000 UTC m=+534.702253238 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009626 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.009615681 +0000 UTC m=+534.702280369 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009651 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.009641841 +0000 UTC m=+534.702306479 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009846 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009884 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009912 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009945 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009983 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010023 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010084 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010146 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010155 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010174 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010179 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010192 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010193 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010203 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010184647 +0000 UTC m=+534.702849355 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010235 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010225678 +0000 UTC m=+534.702890296 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010102 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010238 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010251 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010244589 +0000 UTC m=+534.702909177 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010117 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010295 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010307 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010278 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010265479 +0000 UTC m=+534.702930147 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010370 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010415 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010431 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010439954 +0000 UTC m=+534.703104652 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010505 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010493286 +0000 UTC m=+534.703157974 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010523 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010534 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010542 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010530167 +0000 UTC m=+534.703194835 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010607 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010611 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010629 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010669 4183 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01065774 +0000 UTC m=+534.703322438 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010699432 +0000 UTC m=+534.703364190 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010724 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010733 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010723102 +0000 UTC m=+534.703387870 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010758 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010749593 +0000 UTC m=+534.703414201 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010713 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010894 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010876997 +0000 UTC m=+534.703541765 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010876 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011056 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011041971 +0000 UTC m=+534.703706649 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011212 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011195706 +0000 UTC m=+534.703860394 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011302 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011375 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01136069 +0000 UTC m=+534.704025358 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011400 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011509 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011527 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011561 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011574 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011561056 +0000 UTC m=+534.704225784 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011713 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011751922 +0000 UTC m=+534.704416630 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011935 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012021 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: 
\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012021 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012067 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012068 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012073 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01206059 +0000 UTC m=+534.704725298 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012117 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012107332 +0000 UTC m=+534.704771970 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012120 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012135 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012164 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012175 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012179 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012166653 +0000 UTC m=+534.704831381 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012203 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012196654 +0000 UTC m=+534.704861342 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012241 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012257676 +0000 UTC m=+534.704922294 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012304 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012323 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012328 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012322468 +0000 UTC m=+534.704987156 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012361 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012410 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012412 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012435 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012428921 +0000 UTC m=+534.705093539 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012473 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012506893 +0000 UTC m=+534.705171601 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012474 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012532 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012546 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012557 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012587 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012577695 +0000 UTC m=+534.705242303 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012627 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012614906 +0000 UTC m=+534.705279614 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012635 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012657 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012663 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012654407 +0000 UTC m=+534.705319185 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012888 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012900 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012942 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012933865 +0000 UTC m=+534.705598563 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012982 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013015 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013007107 +0000 UTC m=+534.705671805 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013049 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013088 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013083829 +0000 UTC m=+534.705748517 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013127 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013174 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013186 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013197 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013190313 +0000 UTC m=+534.705854921 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013251 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013262 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013270 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013301 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013291375 +0000 UTC m=+534.705955993 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013366 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013387 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013400 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013436 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013425259 +0000 UTC m=+534.706089977 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013467 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013529 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013555103 +0000 UTC m=+534.706219691 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013569893 +0000 UTC m=+534.706234491 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013606 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013625 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013637 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013649 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013660 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013668896 +0000 UTC m=+534.706333514 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013736 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013737 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013756 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013744408 +0000 UTC m=+534.706409076 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013764 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013898 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013913 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013993 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014005 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013928984 +0000 UTC m=+534.706593672 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014050 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014092 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014081038 +0000 UTC m=+534.706745716 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014140 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014010 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014174 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014263 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014280 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014318 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014305404 +0000 UTC m=+534.706970122 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014140 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014366 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014356976 +0000 UTC m=+534.707021654 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014367 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014478 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014497 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014614 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014633 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014645 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014516 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014557 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014725 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014621833 +0000 UTC m=+534.707286571 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014753227 +0000 UTC m=+534.707417895 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014887 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.0148704 +0000 UTC m=+534.707535088 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014938 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014924932 +0000 UTC m=+534.707589640 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014999 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015028 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015090 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015077086 +0000 UTC m=+534.707741804 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015126 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015156739 +0000 UTC m=+534.707821487 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015220 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015237 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015249 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015259 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015274592 +0000 UTC m=+534.707939330 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015321 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015334 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015351 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015363 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015391265 +0000 UTC m=+534.708055943 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015403 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015443 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015433346 +0000 UTC m=+534.708098064 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015447 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015469 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015458777 +0000 UTC m=+534.708123475 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015489 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015480428 +0000 UTC m=+534.708145116 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015363 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015512 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015581 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015629 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:44 crc kubenswrapper[4183]: 
I0813 19:51:44.015662 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015724 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015989 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016035 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016025603 +0000 UTC m=+534.708690311 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016066 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016088 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016101 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016142 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016128956 +0000 UTC m=+534.708793674 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016141 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016168 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016179 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016185 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016214 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016203238 +0000 UTC m=+534.708867946 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016089 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016239 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016227249 +0000 UTC m=+534.708891987 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016240 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016262 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01625286 +0000 UTC m=+534.708917548 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016263 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016321 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016339 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016354 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016324232 +0000 UTC m=+534.708988950 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. 
No retries permitted until 2025-08-13 19:52:48.016397694 +0000 UTC m=+534.709062362 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016411 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016458556 +0000 UTC m=+534.709123254 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.123741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.123987 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124249 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124305 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.124287828 +0000 UTC m=+534.816952456 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124379 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12435796 +0000 UTC m=+534.817022668 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.124396 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.124658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.124761 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.125120 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.125427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124845 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125623 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125643 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124922 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125225 4183 projected.go:294] Couldn't get configMap 
openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125866 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125909 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125939 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125959 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125968 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125509 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.125677157 +0000 UTC m=+534.818341855 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126033 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126022627 +0000 UTC m=+534.818687225 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126051 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126044608 +0000 UTC m=+534.818709196 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126067 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126061218 +0000 UTC m=+534.818725806 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126082 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126075259 +0000 UTC m=+534.818739857 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.125583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126381 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126457 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126524 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126574 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126560763 +0000 UTC m=+534.819225471 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126576 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126609974 +0000 UTC m=+534.819274682 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126622 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126655685 +0000 UTC m=+534.819320383 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126708 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126760 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126904 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126947 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127057 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" 
(UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127096 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127150 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127135009 +0000 UTC m=+534.819799777 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127202 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127218 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127236 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127260 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127272 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127259452 +0000 UTC m=+534.819924070 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127304 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127308 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127347 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127363 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127375 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127382 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127407 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127408 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127399346 +0000 UTC m=+534.820063964 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127445 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127471 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127465278 +0000 UTC m=+534.820129976 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127523 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127555 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127565 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127553071 +0000 UTC m=+534.820217759 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127595 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127625 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127618283 +0000 UTC m=+534.820282971 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127869 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128162 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128184 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128194 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128203 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128163 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128221 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128235 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128237 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128245 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod 
openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128407 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128425 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128433 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128489 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128506 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128516 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.128739 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128841 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128826497 +0000 UTC m=+534.821491235 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128865 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128858638 +0000 UTC m=+534.821523226 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128881 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128873958 +0000 UTC m=+534.821538556 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128895 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128888779 +0000 UTC m=+534.821553447 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128897 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128911 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128904169 +0000 UTC m=+534.821568757 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128914 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128924 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128928 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12892109 +0000 UTC m=+534.821585678 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128986 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12893712 +0000 UTC m=+534.821601708 (durationBeforeRetry 1m4s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129043 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129070 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129127 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129169 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129202 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129206928 +0000 UTC m=+534.821871676 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129245 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129234969 +0000 UTC m=+534.821899657 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129251 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129259 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129256089 +0000 UTC m=+534.821920757 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129175 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129285 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12927832 +0000 UTC m=+534.821943018 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129322 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129311801 +0000 UTC m=+534.821976529 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129361 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129374 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129377 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129383 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129406084 +0000 UTC m=+534.822070782 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129549 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129915 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129933 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129942 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129970 4183 configmap.go:199] 
Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130003 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130015 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130024 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130042 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130050 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130042282 +0000 UTC m=+534.822706890 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130076 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130066492 +0000 UTC m=+534.822731190 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130094 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130086993 +0000 UTC m=+534.822751591 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130110 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130103334 +0000 UTC m=+534.822768012 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130113 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130116 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130140 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130154 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.130170 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130192 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130181116 +0000 UTC m=+534.822845794 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130208 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.130228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130231 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130225077 +0000 UTC m=+534.822889695 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130125 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130256 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130281 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130275148 +0000 UTC m=+534.822939756 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130328 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130386452 +0000 UTC m=+534.823051220 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.208705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.208912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.209006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.209377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.209633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.209735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.210142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.210229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.231490 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.231622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231671 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231707 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231725 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231888 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.231862193 +0000 UTC m=+534.924527101 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231918 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231941 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231985 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.231970316 +0000 UTC m=+534.924635074 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.232307 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232506 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232529 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232537 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.232559503 +0000 UTC m=+534.925224131 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.432911 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.433049 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.208944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209040 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209465 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211940 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212640 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.213018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.213579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.213738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.213883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.217006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.217297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.217472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.231951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc 
kubenswrapper[4183]: I0813 19:51:45.243603 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243668 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243685 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243706 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243734 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.250376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.260567 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":
[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08
dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270333 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270440 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270462 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270491 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270527 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.274134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b64575504447
08a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.288459 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295272 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295332 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295396 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295420 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295448 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.298981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.311313 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":
[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08
dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.314382 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.315935 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.315968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.315990 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.316017 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.316042 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.331983 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.334511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.337573 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.337757 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.337969 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.338098 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.338277 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.352708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.355406 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089
fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0
f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd
1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.355463 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.373092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.391013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.401894 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.409704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.427029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.432224 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.432541 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.444272 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.460142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.484393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.502688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.523451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.541857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.559654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.573174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.592130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.610392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.627480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.648546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.669644 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.692235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.711597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.728160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.749468 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.768486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.787670 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.806698 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.823186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.840522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.857940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.876660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.897585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.920332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.939978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.960026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.976559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.993377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.017355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.041465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.064493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.084460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.105455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.131559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.145699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.161960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.182054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.200722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.208494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.208674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.216250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.229922 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.248552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.269145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.284604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.301477 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.326096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.341728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.362654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.382502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.399989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.418344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.433903 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.434038 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.435722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.450695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.470748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.494002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209308 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210198 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.213021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.213502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.213759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.438614 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.438950 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.209259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209933 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.211104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.432377 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.432483 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.052715 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/0.log" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.054254 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2" exitCode=1 Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.054482 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2"} Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.055617 4183 scope.go:117] "RemoveContainer" containerID="1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.080896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.111828 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.130881 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.153137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.171905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.188438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209062 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209940 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210559 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211048 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.212162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.212768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.212995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.213184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.213293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213871 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213878 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.213526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.214631 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.236337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.252160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.274387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.293618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.309933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.327415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.339067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.356718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.382858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.414933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.435613 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.435738 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.443502 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.462735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.491399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.512191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.540731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.562040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.578684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.602039 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.619953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.647290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.670030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.696913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.724296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.764759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.804118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.838000 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.869325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.894078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.934757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.985716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.010727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.033229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.058686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.066295 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/0.log" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.066493 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2"} Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.108483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.136625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.162404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.201241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.210134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.210430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.210670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.212087 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.212229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.212399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.212613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.214679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.215032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.215210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.215306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.215499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.215673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.252552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce3
2b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.316722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.382307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.404173 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.453266 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.453401 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.565462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.609289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.677552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.774159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.831854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.883166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.908097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.926107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.948769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.969102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.001579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.020651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.048407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.079964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.111499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.137208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.158627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.190165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.208755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209032 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209858 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210839 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.211252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.211627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.211272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.213133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.227696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.250311 4183 status_manager.go:877] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.278588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.308426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.334742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.366728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.389535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.417989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.432934 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.433063 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.445747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.474443 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.503261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.524507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.551055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.581610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.607166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.634153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.655177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.672956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.691144 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.708276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.725610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.744946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.764432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.783451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.805557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.832052 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.851701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.877731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.898050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.920959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.941588 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.960968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.979684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.998927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.015163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.032298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.060751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.076447 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/0.log" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.080920 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561" exitCode=1 Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.081153 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561"} Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.083173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.084030 4183 scope.go:117] "RemoveContainer" containerID="07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.102342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.121961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.148307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.169374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.193019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.208180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.208341 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.208629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.209239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.209289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.210536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.209890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.210743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.209975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.210266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.211197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.211552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.221683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.240657 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.258307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.276889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.307707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a
40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.336555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.363410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.390102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.415228 4183 status_manager.go:877] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed 
certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.440188 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.440447 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.446705 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.470253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.493737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.522771 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.553383 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.576758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.604001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.630113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.654000 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.677756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.702363 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.722691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.744739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.781291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.803867 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.822987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.841762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.864875 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.894209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.918285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.941567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.962727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.989228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.019200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.088903 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/0.log"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.093708 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7"}
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.208948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.209971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.211383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.211630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.213519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.216146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.216296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.419603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.433087 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:53 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:53 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.433565 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.441102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.466958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.484898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.507150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.531946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
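
Every "Failed to update status for pod" entry in this stretch dies at the same point: the API server's call to the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 aborts because the webhook's serving certificate expired on 2024-12-26, long before the node's current clock. One way to confirm that from the node, sketched with Go's standard library; the address is the one in the log, and verification is skipped deliberately so the expired chain can be inspected rather than trusted:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the recurring webhook error above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification on purpose: the goal is to read the expired
		// certificate's validity window, not to trust it.
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	now := time.Now().UTC()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
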
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.557615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.574454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.595513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.620263 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.643765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.661541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.679196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.699891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.719095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
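
The router startup-probe failure logged earlier in this section ("[-]backend-http failed: reason withheld", "[-]has-synced failed", "[+]process-running ok", then "healthz check failed" with statuscode 500) is the standard aggregated-healthz output shape: each sub-check prints a [+] or [-] line, and any failure turns the whole endpoint into HTTP 500, which the kubelet prober then records as a failed probe. A reconstruction of that shape, with the check names taken from the probe output and everything else assumed rather than the router's actual handler:

package main

import (
	"fmt"
	"net/http"
)

// Sub-check names come from the probe output quoted above; the results are
// hard-coded to reproduce that output. Note this sketch does not preserve
// check ordering, since Go map iteration order is randomized.
var checks = map[string]error{
	"backend-http":    fmt.Errorf("reason withheld"),
	"has-synced":      fmt.Errorf("reason withheld"),
	"process-running": nil,
}

func healthz(w http.ResponseWriter, r *http.Request) {
	failed := false
	body := ""
	for name, err := range checks {
		if err != nil {
			failed = true
			body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
		} else {
			body += fmt.Sprintf("[+]%s ok\n", name)
		}
	}
	if failed {
		// The kubelet records this as probeResult="failure" ... statuscode: 500.
		w.WriteHeader(http.StatusInternalServerError)
		body += "healthz check failed\n"
	}
	fmt.Fprint(w, body)
}

func main() {
	http.HandleFunc("/healthz", healthz)
	http.ListenAndServe(":8080", nil) // hypothetical port for the sketch
}
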
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.736644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.753246 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.779415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-
13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] 
Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.798894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.813907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.829676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.848644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.867138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.884452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.901600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.917929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.934615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.957559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.975018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.988551 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.003492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.019915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.045142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.073909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.109722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.157049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 
19:51:54.191993 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.208069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.208272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.208464 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.208598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.210139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.210490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.210651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.210912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.211099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.211282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.211430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.211619 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.211849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.231007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.269858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.316335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.352863 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch 
stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.395509 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.431520 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.431754 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.431947 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.469055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.513971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.554378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.591248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.628882 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670279 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670373 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670410 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670443 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670463 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.677002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.708991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.749556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.788439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.827708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.869341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.909182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.051857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.073857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.091768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.103548 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.104318 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/0.log" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.110637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.111265 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7" exitCode=1 Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.111326 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7"} Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.111388 4183 scope.go:117] "RemoveContainer" containerID="07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.113452 4183 scope.go:117] "RemoveContainer" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.114359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.128564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.150693 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.190205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208443 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.208536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.208736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.208910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209104 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209510 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209974 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211660 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.212341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.212369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.212570 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.213025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.229070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.269118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.309346 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.349660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.389738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.405084 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.428221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.431603 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.431712 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.470619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.509111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.549149 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.603315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.648403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672427 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672482 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672497 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672517 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672538 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.676602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.689090 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694458 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694476 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694498 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694525 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.710534 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.711687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715274 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715363 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715407 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.729740 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734139 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734225 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734267 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.748461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e65
67ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.749506 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754360 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754376 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754396 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754428 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.770551 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.770612 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.793354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.830858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.870129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.911955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.949662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.990308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.028434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.070402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.116354 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.118370 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.151098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.189539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208767 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.210166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.210329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.227890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.269856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.312263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.352152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.392237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.430765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.432892 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.432974 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.470358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.510332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.548723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.589165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.636142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 
14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.675128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.710190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.750476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.787998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.833890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.870929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.910554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.950076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.989745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.039128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.067853 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.108434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.148481 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.191345 4183 status_manager.go:877] "Failed to update status 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 
19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210757 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211646 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212902 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214009 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.216348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.234166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 
leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.269748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.309703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.351314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.393367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.430134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.433363 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.433466 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.473916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.512328 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 
13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.552934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.591686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.631762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.671139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.715296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.748927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.791380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.828504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.867666 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.910258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.952209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.990313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.029597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.071537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.110306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.151115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.189829 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.208480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.208610 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.208723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.232644 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.233686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.269432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.309528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.348194 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.391381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.432032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.435206 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:58 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:58 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.435332 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.470307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.510678 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.546335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.589101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.629663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.672130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.711319 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.750960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.795613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.828649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.875345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.915763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.952986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.991605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.028182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.068754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.108430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.151392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.190051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208671 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208837 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208856 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.208978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210075 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211128 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211693 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.212149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.212669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.212768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.214475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.214700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.215134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.234294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.274943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.311017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.348750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.389389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.432355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.434100 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.434174 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.470411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.514230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.548702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.592005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.637684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.671538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.709341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.747923 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.792326 4183 status_manager.go:877] "Failed to update status 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 
19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.830866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.869021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.911656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.949686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.989656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.029718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.069555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.113996 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.161651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.193280 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208472 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.208720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.208911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.209223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.209344 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.232660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.407592 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.437431 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.437677 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.169739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.186957 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.203493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.208761 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210138 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210647 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210843 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210870 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211609 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.212191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.212610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.212682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.213297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.213331 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.215041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.215084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.215100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.222402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 
19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.238596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.254125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.433438 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.433573 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.209922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.210612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210836 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.433352 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.433474 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.209443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.209980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210745 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211466 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.212637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.212888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.212930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.215097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.215147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.215297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.216128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.216257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.433926 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.434116 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.208911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209074 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.210132 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.210201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.433388 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.433530 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208534 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211060 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.211524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212281 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.241622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.269937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.291147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.310586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.326667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.346252 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.363210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.381545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.400245 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.411345 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.417429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.433748 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.433937 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.434654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.455425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.472571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.494716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.511286 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.536543 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.556634 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.573265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.588031 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.605925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.622428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.639133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.654938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.676539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.701538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.719491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.738991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.757872 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.773712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.790734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.812314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.828025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.841756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.857351 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.883484 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 
14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.935102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.944437 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.944942 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.945077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.945250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.945384 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:05Z","lastTransitionTime":"2025-08-13T19:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.959048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.977053 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.983836 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984156 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984287 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984379 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984545 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:05Z","lastTransitionTime":"2025-08-13T19:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.987475 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.009425 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015105 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015231 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015267 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015291 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:06Z","lastTransitionTime":"2025-08-13T19:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.020379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.028933 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033686 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033718 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033732 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033751 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033858 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:06Z","lastTransitionTime":"2025-08-13T19:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.038611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.049417 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.052929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054542 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054565 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054592 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054617 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:06Z","lastTransitionTime":"2025-08-13T19:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.068530 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.070432 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.070487 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.085959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.099899 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.119378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.148905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.169759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.185748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.200450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc 
kubenswrapper[4183]: I0813 19:52:06.208300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.208490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.208679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.208870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.209001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.209116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.210715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.216865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\
\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.237627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.253441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.269458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.289357 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.305318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.319150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.343453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.362951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.382658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.401025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.416378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.431650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.433563 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:06 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:06 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.433682 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.449299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.464728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.481490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.496761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.511219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211843 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.217088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.217343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.432160 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:07 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:07 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.432324 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.209894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.210055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.432478 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:08 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:08 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.432589 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210109 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209566 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.211017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.211123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.211159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209705 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.211762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.211947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.214453 4183 scope.go:117] "RemoveContainer" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.235903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.253683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.268602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.283004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.305636 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.320032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.348304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.365573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.386142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.404931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.425928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.433930 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.434051 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.450073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.466041 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.484876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.509271 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.533459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.551080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.569356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.585374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.610325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.635148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.655616 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.677546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.693348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.717671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.733954 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.759086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.792389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting 
down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.812763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.837635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.855295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.870753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.892653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.909739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.925691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.941728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.955310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.193066 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.198695 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa"} Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.199424 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208846 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.209051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.209421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.209532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.209016 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.384450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f
376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.413605 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.415973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.434354 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.434495 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.435322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.463767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 
19:52:10.490287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.513656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.531393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.559318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.576538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.595912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.612337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.630337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 
13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.650673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.671237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.691461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.711148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.729313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.745359 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.762311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.780161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.800473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.815505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.838247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.855675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.873421 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.890107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.910909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.930653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.947686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.964867 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.980401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.997023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.012398 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.038504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.058439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.074053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.092033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 
19:52:11.110944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.130460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.146314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.164420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.181987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch 
stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.199167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.206704 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.207918 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208257 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.208759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208834 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.208981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209468 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210007 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210505 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.212658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.213019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.213197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.213369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.220688 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" exitCode=1 Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.220724 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa"} Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.220755 4183 scope.go:117] "RemoveContainer" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.222944 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.223746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.224423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.239353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.260542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.285055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.301059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.317102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.333940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.349865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.367731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.383535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.399153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.413553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.427553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.432442 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.432594 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.444611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.459377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.476690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.490626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.505916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.529938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.547010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.564254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.579887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.594465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.608076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.621930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.634400 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.645167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.659323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.672023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.687119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.702248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.719733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.742523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.785341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.826745 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.865540 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.904915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.944458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.987002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.023663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.062628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.104674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.162176 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-
13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD 
controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.208750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.209273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.220029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.225077 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.231205 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.231753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.243493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.263920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.303742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.343108 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.383571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.425073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.432973 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.433311 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.464596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.506033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.550945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.584192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.625323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.664291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.702715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.742888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.784249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.826128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.866561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.907159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.943165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.984025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.034256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.068620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.110215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.144326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.186159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.208501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.208718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.208737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.208922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209350 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209667 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.212175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.212267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.216157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.216284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.216375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.228249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc 
kubenswrapper[4183]: I0813 19:52:13.267215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.318565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.352450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.385065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.424902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.434553 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.434632 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.467043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.506485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.548500 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.586552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.624309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.666883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.706643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.748034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.788673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.825458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.864231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.912549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.942434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.987846 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.025168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.066442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.106490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.147068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.194562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.208727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.208967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209178 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.210471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.224109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.271373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.311935 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.345095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.390624 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.424116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.432526 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.432615 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.464420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.503521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.545507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.592057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 
reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.626276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.663963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.703505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.744672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.783516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.824472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.864064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.905381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.948008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.992917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.038513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.064933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.104579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.145413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc 
kubenswrapper[4183]: I0813 19:52:15.198465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.209647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210838 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210535 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212042 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.212479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.212667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.212940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213841 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.214312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213935 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.214695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.214765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.215121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.215379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.216482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.227402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.268129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.309977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.349363 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.390655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.415404 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.429276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.431279 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.431351 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.466036 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.507253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.544295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.590323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 
reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.624546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.665430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.703927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.746173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.790655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.824177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.865302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.905082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.946521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.992266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.024555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.067531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.107490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.146206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.190060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.209843 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.210399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.210627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.210709 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.210851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.210997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.211172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.231890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.273999 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.305629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.346866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.386216 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414183 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414204 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414229 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414260 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.422942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.429501 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"]
,\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"s
izeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"
names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.433761 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.434145 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437038 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437076 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437088 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437109 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437136 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.454608 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.459745 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460019 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460041 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460061 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460107 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.466764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.477042 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image inventory identical, byte for byte, to the list in the first status-patch attempt above; elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.482659 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.482889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.483021 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.483137 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.483254 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.497658 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image inventory identical, byte for byte, to the list in the first status-patch attempt above; elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502267 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502326 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502363 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502392 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.510712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.517856 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image inventory identical, byte for byte, to the list in the first status-patch attempt above; elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.517912 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.545277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.584994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.624376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.665034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.704513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.744732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.787206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.824978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.865716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.906553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.944654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.984642 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.026637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.071659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.110878 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.154617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.193322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209551 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.210042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210772 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.210500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210978 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.211016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.211099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.211499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.211986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213487 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213693 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214875 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.217273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.217524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.217727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.219307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.219481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.219942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.231440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.269561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.310566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.350303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.386924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.428004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.432319 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.432423 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.466660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.505851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.548122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.586931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.631724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.668022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.707047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.747098 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.788624 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.826323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.873994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.908004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.950001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209368 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.209742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209496 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.211124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.432039 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.432145 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209443 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.209741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210204 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211539 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212475 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.213152 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.213210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.213881 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.214184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.214368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.214888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.433021 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.433099 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208761 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.209358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.209479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.416675 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.432598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.432692 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209218 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210196 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210719 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.212103 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.212229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.212367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.213097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.214039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.214179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.431557 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.431667 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.209413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209634 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.210294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.432638 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.433195 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210391 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211200 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211659 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212150 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.432541 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.432657 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.208891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.208907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209165 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.210113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.210254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.210625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.432433 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.432563 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.208585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208665 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.208719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.208913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209509 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210522 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.211150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.211368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.212092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.212242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213360 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213893 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.214252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.216566 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.217610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.226358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.242549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.299749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 
reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.341904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.359156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.375407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.390704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.409386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.417898 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.426634 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.431590 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.431688 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.444429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.461537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.478427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.503501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.518568 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.532860 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.546481 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.564679 4183 status_manager.go:877] "Failed to update status 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 
19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.581764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.595706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.611643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.627545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.640945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.653988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.669505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.684758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.703379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.720294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.738580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.756591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.771551 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.785974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.802235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.820982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.844721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.861896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.879449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.901580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.919540 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.935967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.952310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.970757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.987613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.005661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.040874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.056947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.071417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.087879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.100908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.117528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.137686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.152342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.169756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.184095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.200458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.208193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.208602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.218174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.238245 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.258348 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.278322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.295976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.312292 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.329156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.349715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.369468 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.389175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.458509 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.462533 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.462605 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.475948 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.489750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.678978 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679127 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679154 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.695941 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701423 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701503 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701549 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701580 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.714964 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720668 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720902 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.721022 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.742042 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748221 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748300 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748325 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748354 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748382 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.765415 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.772596 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.772711 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.772753 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.773066 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.773111 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.798253 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.798717 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.208401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208464 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.208754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209754 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211940 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.212012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.213468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.215265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.215478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.216468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.432599 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.432681 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208875 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.209576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.432746 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.432951 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.208710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.208977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209110 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209644 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208532 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.211202 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.211467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.211519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.212935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.213087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.213515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.213539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.214194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.215190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.215389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.215891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.216392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.216593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.216730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.217649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.219115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.432152 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.432228 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.209563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209054 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.209953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.211031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.419375 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.432321 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.432414 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.208484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208519 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.208678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.208980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209214 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209891 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.210051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.210342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.210617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211573 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212834 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.212849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.213172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.213185 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.213705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.432358 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.432514 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209266 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.210957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.211220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.432208 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.432295 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.209599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.210903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.211606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.212183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.211154 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.212268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.212380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.212635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.212788 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.212640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.213111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.213535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.213700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214512 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215170 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216638 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.217256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217777 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.432982 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.433080 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209605 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.432071 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.432196 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208448 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208487 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.208581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209162 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210024 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210365 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210483 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.211062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.211207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.211212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.211244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.211409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.211612 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213476 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.214322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.214401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.214708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.230689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113
ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.253852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.272341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.294992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.316486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.345283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.367209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.386589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.406740 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.422091 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.434961 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.435098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.455720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.484301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.504250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.526163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.545954 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.561206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.582883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.601440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.619163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.636193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.655635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.673654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.697355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.721909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.742057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.764238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.786316 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.808679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.834060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.851181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.869679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.889315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.912331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.939384 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 
reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.962990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.982439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.011493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.031604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.049744 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.068270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.088919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.105469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.119708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.137208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.163378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.178411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.193207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208835 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.208928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.210040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.211242 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.237267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.285341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 
19:52:36.319504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.339747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.361300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.383338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.402186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.420719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.432356 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.432490 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.440084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.464341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 
13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.497535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.523495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.545963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.562734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.584371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.612921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.638333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.658049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.674476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.693032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.709945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176344 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176444 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176468 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176499 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176536 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.196779 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205346 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205583 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205616 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205644 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205894 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.208629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.208716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208777 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.208956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209043 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209379 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.213223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.230112 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.236917 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.236998 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.237030 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.237061 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.237097 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.256130 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266169 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266257 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266285 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266318 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266363 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.292768 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303859 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303900 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303913 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303933 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303961 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.324874 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.324934 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.337735 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.342713 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf"} Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.343674 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.363228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.386204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.408880 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.433285 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:37 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:37 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.433401 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.444023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.468489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.487294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.513565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 
19:52:37.539382 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.563409 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.587610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.604960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.624506 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch 
stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.646496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.665473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.683084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.701585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.718976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.737227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.754727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.775330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.794554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.810987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.831508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.846189 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.862428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.879920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.895570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.911611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.926516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.942401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.960427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.979386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.998986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.016299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.033333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.050437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.070426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.090225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.110010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.127127 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.149472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.162708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.180225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.200313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208366 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.208600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.208860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.208952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.223468 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc48
2d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.242190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.265950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.286164 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.303585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.321033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.336490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.348697 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349385 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/0.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349487 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2" exitCode=1 Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349571 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2"} Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349612 4183 scope.go:117] "RemoveContainer" containerID="1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.350361 4183 scope.go:117] "RemoveContainer" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.350946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.360041 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.363171 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.369945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.370756 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" exitCode=1 Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.370889 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf"} Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.372999 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.375539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.396916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.416534 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.434054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.435514 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:38 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:38 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.435584 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.444347 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.457290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.474354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.489300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.504978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.520207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.534614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.552553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.567979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.582668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.599143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.619122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.635775 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.652259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.667767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.706336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.742013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.781196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.821572 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.863521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.914224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.939955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.980985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.021846 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.064911 4183 status_manager.go:877] "Failed to update status 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 
19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.103720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.142686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.182336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.209476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209889 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.213086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.213768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.214683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.225661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.260868 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.300564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.343139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.376348 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.382404 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.383272 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.383985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.387456 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 
reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.422137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.432161 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.432261 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.462661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.502527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.542061 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.583749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.623154 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.664280 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.702579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.742490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.784263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.819964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.862032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.902937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.944042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.984281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.023729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.062937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.101556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.141470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.182193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211379 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211862 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.212009 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.212169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.212255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.222570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.271469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.303912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.340687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.382498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.420310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.423025 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.432362 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.432457 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.467180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.499089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.541457 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.580393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.622895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.663042 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.701558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.741302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.782293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.824763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.863089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.903084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.944534 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.982268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.027964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.061227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.101132 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.143663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209678 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.209946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210195 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209446 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211746 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.214214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.215040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.219652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.241419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.260555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.300637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.343492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.382770 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.421583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.433422 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.433589 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.466744 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.501232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.540895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.582045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.622079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.669053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.701597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.741217 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.785021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.829952 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.862883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.900544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.940912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.982482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.032242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.075143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.107989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.145901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.183904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.208943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.209171 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.209312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209847 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.223855 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.265003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.304026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.342508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.384622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.421764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.433003 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.433136 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.467297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.500714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.543349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.581567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.620907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.660877 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.702673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.741913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.786140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.821018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.862122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.901428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.940972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.980905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.024003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.062070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.104511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.142213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.178631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.210006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208911 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209018 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209256 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.220345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.220539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.220947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.221533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.221942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.222457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.222603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.222986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.223210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.223451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.224156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.224272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.224623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.228269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.234944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.262286 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.307238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.340870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.386695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.423363 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.432499 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.432576 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.432620 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.433737 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839"} pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.433910 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
containerID="cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839" gracePeriod=3600 Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.471203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.504175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.542438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.583471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.627254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.663307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.702337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.741944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.784352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.821666 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.862193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.901605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.947467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.208896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209125 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.210372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.208534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.208683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.208944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209009 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209133 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209610 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209837 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.211168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.211232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.228109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.243759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.257682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.274879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.299505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.315998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.333233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.349437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.369088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.386205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.402717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.418604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.424749 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.435063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.450501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.466728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.484494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.506103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.526745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.544121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.560293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.576479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.592342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.606424 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.623898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.640028 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.656033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.672996 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.689672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.707571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.727997 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.744728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.764464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.781356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.798692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.813233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.829522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.847609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.871681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.891981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.909756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.926926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.940339 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.960178 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.979178 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.997160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.014919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.030926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.056042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.070508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.085050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.100600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.120101 4183 status_manager.go:877] "Failed to update status 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 
19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.136747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.151555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.167132 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.184219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209280 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.209903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.209925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.221974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.262428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.303976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.341939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.384296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.426470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.463005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.506418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.541329 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.578547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.621934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.208941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209164 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209560 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210504 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211387 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211993 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.212202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.212369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.212720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214832 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.215558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512281 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512623 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512754 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.513100 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.529050 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535748 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535910 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535934 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535958 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.553158 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558619 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558668 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558683 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558704 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558724 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.574415 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579446 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579539 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579561 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579588 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579612 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.617146 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833413 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833517 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833634 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833667 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833909 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID:
\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834123 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834169 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834210 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.834467 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.834564 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.834546662 +0000 UTC m=+656.527211290 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834595 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834632 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835047 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835108 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835096868 +0000 UTC m=+656.527761486 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835161 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835190 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:49.83517882 +0000 UTC m=+656.527843438 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835238 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835268 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835260362 +0000 UTC m=+656.527924980 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835316 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835346 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835334735 +0000 UTC m=+656.527999353 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835367 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835396 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835418 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835407137 +0000 UTC m=+656.528071765 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835433 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835425587 +0000 UTC m=+656.528090205 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835480 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835498 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835509 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835498629 +0000 UTC m=+656.528163327 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835519 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835532 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835559 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835567 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835556891 +0000 UTC m=+656.528221509 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835589 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835581162 +0000 UTC m=+656.528245780 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835613 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835635 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835671 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835634823 +0000 UTC m=+656.528299491 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835688 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835680264 +0000 UTC m=+656.528344862 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835697 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835716745 +0000 UTC m=+656.528381363 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835879 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835922 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835906401 +0000 UTC m=+656.528571019 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835977 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836005 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835997893 +0000 UTC m=+656.528662511 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836177 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836192 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836228 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.83621969 +0000 UTC m=+656.528884308 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.839663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.839745 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.839998 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840018 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840108 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840128 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.84010544 +0000 UTC m=+656.532770178 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840160 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.840148222 +0000 UTC m=+656.532813000 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840202 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840234 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.840226214 +0000 UTC m=+656.532890892 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.840036 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840292 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840363 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.840347067 +0000 UTC m=+656.533011775 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.841454 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.845391 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.84537118 +0000 UTC m=+656.538035978 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.841251 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.845860 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.845907 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.845966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.845982 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.846010 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846067 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846023979 +0000 UTC m=+656.538688597 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846112 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.846128 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846153432 +0000 UTC m=+656.538818150 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846189 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846247 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846228075 +0000 UTC m=+656.538892803 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846404 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846388059 +0000 UTC m=+656.539052777 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948020 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948087 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948114 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948247 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948362 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948341411 +0000 UTC m=+656.641006159 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948448 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948432933 +0000 UTC m=+656.641097531 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948540 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948271 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948637 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948629239 +0000 UTC m=+656.641293857 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948682 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948689 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948706071 +0000 UTC m=+656.641370689 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948748 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948877 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948765113 +0000 UTC m=+656.641429721 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948917 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948955 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948946708 +0000 UTC m=+656.641611446 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948976 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.949008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.949001269 +0000 UTC m=+656.641665887 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.949714 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.949904 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.949914 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.949960 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.949947996 +0000 UTC m=+656.642612724 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.950001 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.950039 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.950030879 +0000 UTC m=+656.642695497 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051168 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051282 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051321 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051347 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051355 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051467 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.051441145 +0000 UTC m=+656.744105753 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051471 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051380 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.051516157 +0000 UTC m=+656.744180885 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051555 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051683 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051695 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051717 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.051722343 +0000 UTC m=+656.744387081 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051760 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051955 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052166 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052213467 +0000 UTC m=+656.744878215 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052226 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052252658 +0000 UTC m=+656.744917246 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052297 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052282939 +0000 UTC m=+656.744947677 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052167 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052326 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052328 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052340 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052347 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052353 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052359 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052387 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052378722 +0000 UTC m=+656.745043330 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052407 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052400002 +0000 UTC m=+656.745064590 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052427 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052462 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052468 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052455724 +0000 UTC m=+656.745120432 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052518746 +0000 UTC m=+656.745183424 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052553 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052544236 +0000 UTC m=+656.745208884 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052310 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052594 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052606 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052642 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052631639 +0000 UTC m=+656.745296357 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052246 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052672 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052684 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052721 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052709461 +0000 UTC m=+656.745374169 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.052177 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.052871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.052939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053009 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053020 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053051 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05304065 +0000 UTC m=+656.745705268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053090 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053131 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053119443 +0000 UTC m=+656.745784151 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053132 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053155 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053168 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053169 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053190 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053199 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053202 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053192045 +0000 UTC m=+656.745856853 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053092 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053233 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053222586 +0000 UTC m=+656.745887334 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053373 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053463 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053470 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053487 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053495 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053508314 +0000 UTC m=+656.746173112 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053543 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053552 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053564 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053573 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053605 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053624 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053637 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053641708 +0000 UTC m=+656.746306316 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053675 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053690 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053710459 +0000 UTC m=+656.746375268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053733 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05372581 +0000 UTC m=+656.746390518 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053858 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053886 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053899 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053901125 +0000 UTC m=+656.746566023 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053953 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053971 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053972 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053961097 +0000 UTC m=+656.746625915 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054003 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053991417 +0000 UTC m=+656.746656106 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054011 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054031 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054020818 +0000 UTC m=+656.746685486 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053863 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054064 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054053 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054041739 +0000 UTC m=+656.746706537 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054084 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054097 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054192 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054195 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054224 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054205694 +0000 UTC m=+656.746870362 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054254 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054267 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054275 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054307 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054298946 +0000 UTC m=+656.746963694 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054317477 +0000 UTC m=+656.746982195 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054363 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054384 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054403 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054410 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054415 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054459 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05444543 +0000 UTC m=+656.747110118 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054492 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054505 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054513 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054537283 +0000 UTC m=+656.747202021 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054652 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054684927 +0000 UTC m=+656.747349545 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054750 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054858 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054870 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054897 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054882423 +0000 UTC m=+656.747547121 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054926 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054941 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054958 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054950025 +0000 UTC m=+656.747614643 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054989 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054991 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055025 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055017657 +0000 UTC m=+656.747682255 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055032 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055052 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055074498 +0000 UTC m=+656.747739086 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055085 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055105 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055124 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055127 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05512119 +0000 UTC m=+656.747785808 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055155 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055163 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055152041 +0000 UTC m=+656.747816729 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055191 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055195 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055241 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055259 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055249283 +0000 UTC m=+656.747913961 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055242 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055280 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055271414 +0000 UTC m=+656.747936072 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055301 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055328 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055352 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055374 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055393 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055399 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055404 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055413 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055439 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055428488 +0000 UTC m=+656.748093176 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055459 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055471 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055479 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055460 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055450469 +0000 UTC m=+656.748115117 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055505261 +0000 UTC m=+656.748169849 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055523 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055542 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055553 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055527 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055519611 +0000 UTC m=+656.748184199 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055592043 +0000 UTC m=+656.748256721 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055355 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055749 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055975 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056102 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056282 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056298 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056306 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057081 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057106 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057091736 +0000 UTC m=+656.749756354 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057135 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057129047 +0000 UTC m=+656.749793645 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057151 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057144567 +0000 UTC m=+656.749809165 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057169 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057179 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057188 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057228 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057215689 +0000 UTC m=+656.749880387 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057249 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057260 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057276 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057269571 +0000 UTC m=+656.749934289 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057295 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057314 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057336 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057351 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057341133 +0000 UTC m=+656.750005831 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057377 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057383 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057400 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057394344 +0000 UTC m=+656.750058952 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057450 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057465906 +0000 UTC m=+656.750130514 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057505 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057511 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057527 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057521288 +0000 UTC m=+656.750185896 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057547 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057571 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057601 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057612 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05759978 +0000 UTC m=+656.750264498 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057632 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057623531 +0000 UTC m=+656.750288209 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057648 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057669 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057663382 +0000 UTC m=+656.750327990 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057695 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057702 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057712 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057724 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057428 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057572 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057718324 +0000 UTC m=+656.750382952 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057876 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057771795 +0000 UTC m=+656.750436433 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057900 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057890248 +0000 UTC m=+656.750554906 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057959 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058028 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058091 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058168 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object 
"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058196 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058188837 +0000 UTC m=+656.750853455 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058238 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058262 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058255079 +0000 UTC m=+656.750919697 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058295 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058306 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058318 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05831096 +0000 UTC m=+656.750975558 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058354 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058344461 +0000 UTC m=+656.751009089 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058357 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058383 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058377552 +0000 UTC m=+656.751042170 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.159504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.159673 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159733 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159861 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159879 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.159933 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159974 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.159953133 +0000 UTC m=+656.852617871 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160056 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160073 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160085 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160137 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160122398 +0000 UTC m=+656.852787016 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160061 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160216 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160384 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160401 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160415 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160428 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160438 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object 
"openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160461 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160476 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160484 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160497 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160415 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160404996 +0000 UTC m=+656.853069594 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160540 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.16053091 +0000 UTC m=+656.853195498 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160552 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160388 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160554 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.16054813 +0000 UTC m=+656.853212718 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160596 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160588171 +0000 UTC m=+656.853252759 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160613 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160606452 +0000 UTC m=+656.853271120 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160837 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160825658 +0000 UTC m=+656.853490346 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160874 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160919 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160966 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160994 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161006 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160996343 +0000 UTC m=+656.853661061 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161038 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161059 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161073 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161080 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161119 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161131 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161133 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161139 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:48 crc 
kubenswrapper[4183]: E0813 19:52:48.161171 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.161161908 +0000 UTC m=+656.853826536 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161208 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161234 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.16122758 +0000 UTC m=+656.853892268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161341 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161372 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161385 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161445 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.161422015 +0000 UTC m=+656.854086773 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161456 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161469 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161478 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161502 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.161495647 +0000 UTC m=+656.854160265 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161548 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161574 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161628 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161679 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: 
\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161964 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161973 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162047 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162053 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162063 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162097 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162087214 +0000 UTC m=+656.854751842 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162133 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162146 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162155 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162189 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162180547 +0000 UTC m=+656.854845165 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162222 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162247 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162271 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod 
\"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162355 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162364 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162396 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162405 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162394913 +0000 UTC m=+656.855059631 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162451 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162474 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162705 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162712 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162721 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162732 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162745 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: 
\"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162761 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162752873 +0000 UTC m=+656.855417581 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162856 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162894 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162910 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162918 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162947 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162938358 +0000 UTC m=+656.855603076 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162984 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163007 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:50.1630009 +0000 UTC m=+656.855665518 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163007 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163034 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163028701 +0000 UTC m=+656.855693399 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163047 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163075 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163081 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163094 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163082962 +0000 UTC m=+656.855747750 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163116 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163105873 +0000 UTC m=+656.855770531 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163130 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163133 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163126314 +0000 UTC m=+656.855790902 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161082 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163157 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163150484 +0000 UTC m=+656.855815102 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163178 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163171245 +0000 UTC m=+656.855835933 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163194 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163212 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163215276 +0000 UTC m=+656.855880004 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163237 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163230867 +0000 UTC m=+656.855895575 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163251 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163269 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163277 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163269368 +0000 UTC m=+656.855934066 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163285908 +0000 UTC m=+656.855950616 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163309 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163322 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163327219 +0000 UTC m=+656.855991827 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163053 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163349 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.16334246 +0000 UTC m=+656.856007168 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163355 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163367 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163392 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163385961 +0000 UTC m=+656.856050569 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163401 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163416 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163425 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163442 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163451 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163442143 +0000 UTC m=+656.856106761 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163456 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163483484 +0000 UTC m=+656.856148162 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163526 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163545526 +0000 UTC m=+656.856210144 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163594 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163604 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163614 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163637 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163631288 +0000 UTC m=+656.856295896 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163934 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163922136 +0000 UTC m=+656.856586864 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.208989 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.209278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.209721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.210316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.210714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.211020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.265051 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265289 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265336 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265350 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265449 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.265423635 +0000 UTC m=+656.958088343 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.266701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.266898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267261 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267323 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267583 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267694 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.267669979 +0000 UTC m=+656.960334777 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267438 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267728 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267769 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:50.267758041 +0000 UTC m=+656.960422719 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.208993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209461 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209861 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.212254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.213100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.213303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.213316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.214236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.214377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.211086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.211605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.211965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.427233 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209283 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.209659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210325 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211697 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212134 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.213062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209278 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.209761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.209959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.209585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.209921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.210397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.210510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.210562 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.210694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.210868 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.212462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.212865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.212879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213394 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214641 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.215661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.221724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.223183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.223953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.226245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.226710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.227090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.227328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.227608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.228634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.228893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.230050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.230120 4183 scope.go:117] "RemoveContainer" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.272729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.291619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.309554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.353377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.374133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.391290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.412137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.432056 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.450312 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.469314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.493533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.528063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.548427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.580703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.597874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.624158 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.644935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.660446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.678441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.704178 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.723474 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.747950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.766311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.784418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.802502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.827606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.887004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.909603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.932961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.956100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc 
kubenswrapper[4183]: I0813 19:52:53.984708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.007942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.036349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.060298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.084656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.104630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.124106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.147000 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.180891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.204590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.208872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.209105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.209343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.209487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.210386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.211355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211362 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.211877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.212121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.212404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.210965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.212583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.213313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.249073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.272723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.325278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.358963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.384407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.438616 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.451481 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.451620 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb"} Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.465028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.487264 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.512259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.534289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.557134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.580708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.602307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.621275 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.643473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.660925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671428 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671592 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671620 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671648 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671669 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.680214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.697441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.717333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.739209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.755713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.776009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.796024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.814228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.833608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.858330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.872556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.901065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.917405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.939248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.962034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.986637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 
19:52:55.004638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.056963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.073019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.100671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.121120 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.136025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.154190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.208968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209252 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210066 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211466 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212904 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.214079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.214166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.223465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented 
the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.245891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.276459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.294213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.311184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.325979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.342197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.360517 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.378013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.397463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.414562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.428270 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
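[Annotation, not part of the captured log: every "Failed to update status for pod" record in this stretch fails for one shared reason. The kubelet's status patch is intercepted by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and the webhook's serving certificate expired on 2024-12-26T00:46:02Z while the node clock reads 2025-08-13T19:52:55Z, so every TLS handshake is rejected. A minimal Go sketch of how one might inspect that certificate's validity window against the local clock — only the endpoint address is taken from the log; the program itself is illustrative:

    // Illustrative sketch, not from the log: fetch the webhook's serving
    // certificate and compare NotBefore/NotAfter with the local clock.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Skip verification on purpose: strict verification would reject
        // the expired certificate before we could inspect it.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        now := time.Now().UTC()
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
        fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
        fmt.Println("now:      ", now.Format(time.RFC3339))
        if now.After(cert.NotAfter) {
            fmt.Println("certificate expired: matches the x509 error in the log")
        }
    }

The record immediately above is a distinct symptom: the kubelet reports NetworkPluginNotReady because no CNI configuration exists in /etc/kubernetes/cni/net.d/. A companion sketch, same caveats (only the directory path comes from the log), that checks whether the network plugin has written its config yet:

    // Illustrative sketch, not from the log: list the CNI config directory
    // the kubelet complained about.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const dir = "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read", dir, "->", err)
            return
        }
        if len(entries) == 0 {
            fmt.Println(dir, "is empty: matches NetworkPluginNotReady in the log")
            return
        }
        for _, e := range entries {
            fmt.Println("found CNI config:", e.Name())
        }
    }

Both checks point at the same underlying condition: the node resumed with certificates and network state that predate its current clock, so status patches and CNI setup fail together until the certificates are rotated.]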
Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.430239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.446119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.464652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.481684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.497160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.515621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.530951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.546912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.560681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.576488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.590894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.606186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.622268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.641249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.655300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.670146 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.687590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.700914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.719965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.733304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.750473 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.767552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.783418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.806909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.847632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.891223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.926932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.967972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.009327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.052226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.088056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.128925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.168055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.209189 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.209204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.209421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.210002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.210221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.217453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.250113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.287005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.340495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.366313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.409898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.448664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.488369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.527413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.568209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.606303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.645419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.686023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.727454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.768714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.809265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.851656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.890551 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.927907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.968662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.007645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.048984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.085914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.126509 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.167512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.208608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.208956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.209356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.209559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.209987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.210165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.210501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.210696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.210947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.211207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.211456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.211729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212494 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.214330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.214700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.214920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.215308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.215959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.215979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.217088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.217183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.221431 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2
46fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.248209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.288145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.325022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.369307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.407085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.445620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.487062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.526150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.565870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.622015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.649319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.694467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.726227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.767050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.816111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825128 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825224 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825280 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.844592 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.849662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851331 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851403 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851450 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851474 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851506 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.867424 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.872876 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873066 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873106 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873133 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.890447 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.892430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895277 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895371 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895429 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.911046 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917452 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917496 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917525 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.930628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.934081 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089
fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0
f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd
1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.934152 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.974238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.064854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.080330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.125372 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.146370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch 
stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.167079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.208356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.208602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.208981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.209274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.209458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.209664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.210167 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.210284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.210328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.210412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.210656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.248416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.292560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.345142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.369067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.412622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.451603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.489734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.529354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.567941 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.616097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.650751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.691277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.728302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.769409 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.808077 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.848614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.888267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.929602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.972584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.012287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.048247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.087204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.127933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.167383 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.206258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209530 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210182 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210636 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212482 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.213463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.214035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.214183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.214249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.251501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.288162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.209476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.209762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.210083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.210163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.210394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.210462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.210709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.211181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.211344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.430758 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.210281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.210533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.210716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212175 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213051 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214324 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.215003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.215052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.215177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.215258 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.215597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.215881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.217097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.217164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.217181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.217316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.217357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.217587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.208890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.210581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209952 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210135 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.213443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.215105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.215425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.215587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.218500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.218629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.220023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.208693 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209675 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.210695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.208734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.208859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.209647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.209969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210674 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.212031 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.212097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.213021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.213345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.213413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.215129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.232749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.249553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.265949 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.288121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.303250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.325324 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.346055 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.364583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.385025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.400961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.420532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.432664 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.438473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.456304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.474210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.502157 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-
13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.523232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.542231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.559857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.577721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.595336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.610971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.628535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.644528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.660739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.687394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.706478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.726959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.745459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.761669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.793258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.810717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.826241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.844393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 
19:53:05.865729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.880658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.905939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.924566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.943441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch 
stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.958690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.976536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.991988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.009247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.029199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.051684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.074064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.091026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.108384 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.127712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.161603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.176522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.193566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208854 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208852 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.208969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.226697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.247553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.262937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.280143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.297460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.314589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.329411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.344491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.363875 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.384139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.399159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.416480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.431613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.446542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.465060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.208974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.208974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209302 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209985 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210760 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211918 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.212433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.213145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.213689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.214999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.215462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.215645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.216002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.216246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.217137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.217521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.217700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.218090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.218442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.218567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.218929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.218942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.219512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.220171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.221482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.221573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.223037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112494 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112560 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112579 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112602 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112629 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.127077 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133043 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133096 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133115 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133137 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133163 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.149139 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154626 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154648 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154671 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154695 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.170357 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.175049 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.175276 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.175305 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.175408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.175547 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.194226 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.199715 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.199900 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.199980 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.200001 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.200092 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.208980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.210148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.210372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.210607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.212317 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.212972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.219237 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.219362 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.209751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.210562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.210867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.211145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.211568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.212124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.211952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.212323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.212548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.212970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.213392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.214186 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.214190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.214527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.214707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.214964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.215074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.216642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.216686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.216869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.217045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.217137 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.217332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.217468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.217886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.217972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218535 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218778 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.219165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.219975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.220232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.220986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.221095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.221512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.221974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.222762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.224671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.144518 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.144678 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.208303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.208509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.208682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.208775 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.208901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.434340 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.209916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210663 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210756 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212013 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.208990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209285 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.209590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210225 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210768 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212341 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.214027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.208720 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.208925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208349 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208360 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.210420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.210547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.210665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.210898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.212015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.212082 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.212182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.213135 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.213958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.214061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.214857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.232917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.257095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.275741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.294017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.311263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.326082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.350167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.367321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.386701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.406995 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.424198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.436517 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.443739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.475644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerI
D\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.490565 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.504499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.521441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc 
kubenswrapper[4183]: I0813 19:53:15.546015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.564356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.581878 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.601415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.619904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.636727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.651462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.670842 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.687560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.705231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.722704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.746117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.764567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.779374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.793760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.812122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.829014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.844001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.858650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.874405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.892251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.911169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.927621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.944868 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.962649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.979042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.996574 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.012300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.026681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.043512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.058980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.075251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.094188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.110110 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.131981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.149296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.164255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.182059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.196450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.209497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.209598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.209908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.217025 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.233035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.249264 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.266023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.282951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.303166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.318633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.338112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.356717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.376128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.405346 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.420631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.208763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.208956 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.208959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209431 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210866 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.212068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.212721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.213080 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.209490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.209860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.209949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.620166 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.620735 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.621382 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.621985 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.622493 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.651260 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659754 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659944 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.660001 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.683285 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692271 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692395 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692411 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692680 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692976 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.709458 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716134 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716282 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716598 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.731537 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737392 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737532 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737635 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737765 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.738116 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.752496 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.752555 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.208682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208731 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.208988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209059 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209951 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.211160 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.211541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.211696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212145 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.214505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.215055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.215297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.208438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.208505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.208661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.208983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209107 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.438349 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209379 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.210138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.210995 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.211477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212243 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.214981 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.215172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.208944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.209941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.210455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.211475 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.567394 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.573178 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"} Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.573927 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.593181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.607752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.622102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.648006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.698183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.727766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.752717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.781315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.806877 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.830051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.847684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.865368 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.882685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.905244 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.923713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.936009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.952129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.967511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.984148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.003141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.024410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.041828 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.065370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.083555 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.103218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.125183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.144210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.163094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.180890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.199082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208528 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.208670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209703 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.211891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.211974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212039 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212449 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212937 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213437 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.222706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.250925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-
13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.268285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.288220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.310289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.328407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.347659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.365228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.382364 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.404866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.420067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.449433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.477099 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.502895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.520673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.539114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.565991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.588006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.608159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.635103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 
19:53:23.655737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.675089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.694536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.718288 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.736921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch 
stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.756496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.772937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.789085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.816278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.835668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.851892 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.867883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.888283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.905277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.923323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.944177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.964361 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209505 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.209989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.583497 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.584634 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.589535 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" exitCode=1 Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.589610 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"} Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.589659 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.591641 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.592274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.611151 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.630162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.650660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.670662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.690138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.711723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.728917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.752108 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.772436 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.791573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.810438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.825256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.842180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.865908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.889759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.905930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.925144 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.941462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.968054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.993271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.013392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.031716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.051455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.069413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.095557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerI
D\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.113754 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.130412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.148394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc 
kubenswrapper[4183]: I0813 19:53:25.171521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.190070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.206906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.209436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209689 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209854 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210195 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210737 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211494 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.212034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.212076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.212132 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.212207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.212423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.236087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.253641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.268482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.285117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.304254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.322188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.342421 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.364543 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.384643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.406592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.422127 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.435988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.440076 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.457238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.475270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.495190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.510984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.526122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.541479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.557306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.571163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.588255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.595319 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.613377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.627767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.644326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.661423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.683956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.703188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.718349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.742963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.762330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.778538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.799734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.829227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.868714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.912340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.948941 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.988226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.033058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.070414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.109327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.149852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.193585 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208493 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.208930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.232056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.269389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.311014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.353926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.389309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.426273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.468221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.514729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.552699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.592610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.631348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.668980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.707527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.747072 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.787953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.826284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.867460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.936428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.990071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.007532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.025362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.066572 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc 
kubenswrapper[4183]: I0813 19:53:27.110043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.153537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.188630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209382 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.209544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.209932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210190 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210936 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211702 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212852 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.213635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.213748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.214508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.215028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.214862 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.215353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.227563 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.269975 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.307988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.346944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.389491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.430501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.469200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.509052 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.549620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.586342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.629426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.667841 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.708647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.753720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.788652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.827499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.866369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.909362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.947887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.991220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.028611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.069532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.108221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.150567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.186164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208906 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.209076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.209143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.209422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.230106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.269551 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.307370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.348529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.390113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.429726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.469704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.509945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.545067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.589112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.625949 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.077248 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.077744 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.077989 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.078215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.078358 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.092331 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097414 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097465 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097500 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097527 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.111095 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115351 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115706 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115885 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.116049 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.129256 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.133742 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.133881 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.133898 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.133916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.133942 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.146308 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.150916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.150973 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.150990 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.151009 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.151029 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.165069 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.165121 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.208670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.208880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209555 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210104 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210625 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211447 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.212145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.212985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.214030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.214092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.214160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.623087 4183 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839" exitCode=0 Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.623611 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839"} Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.208656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.208912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.208985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209132 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.441591 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.630111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02"} Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.650906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.675255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.693444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.709315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.732295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.780334 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.799883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.817633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.832910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.850378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.867869 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.885717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.903042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.920897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.940193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.957944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.975079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.993986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.009763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.024521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.045401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.062640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.082641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.102222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.127310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.144765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.163260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.183273 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.205641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212463 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213495 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.217061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.217155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.229367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.246833 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.265301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.288169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.302377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.318284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.340349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-
13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.364733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.385035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.401403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.416662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.430462 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.431706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.437912 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.438012 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.446452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.464122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.479497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.496936 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.517951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.535666 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.550668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.567720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.581755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.606953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.624174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.643075 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.670693 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 
19:53:31.689319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.704929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.718165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.733629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.758219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch 
stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.778196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.797734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.820394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.838613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.860536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.878112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.894491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.911205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.209052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.209116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.209695 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.210564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.210947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.210990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.432020 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.432567 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.208699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208723 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.208902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209233 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211078 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211511 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.212053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.212109 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209686 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.213046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.432657 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.432750 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209579 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.210256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.432582 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.432909 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208248 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208686 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209174 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209675 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210575 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.212344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.228976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.246680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.264431 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.281380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.305188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.324247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.339954 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.356597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.370849 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.393555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.410696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.427482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.432627 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.433086 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.443071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.443253 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.459449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.476096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.500226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerI
D\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.515552 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.529081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.551895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc 
kubenswrapper[4183]: I0813 19:53:35.569436 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.586751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.603408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.620765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.637101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.657719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.672039 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.690353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.707181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.724412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.746962 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.769400 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.790167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.805204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.821278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.841564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.862171 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.877699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.894078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.913498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.931602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.950005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.970697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.990188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.009600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.054363 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.082422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.117853 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.136869 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.156267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.175321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.195651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209175 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.209505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.209606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.210254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.216507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.239172 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.257266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.278303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.293213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.311450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.330961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.351692 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.373683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.392943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.409003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.429216 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.433245 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:36 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:36 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.433347 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.467128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.483283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.501753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.521272 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.208877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.210382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.210755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.211206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.211688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.212537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212875 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213890 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.433091 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:37 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:37 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.433234 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.208859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.209162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.210014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.432553 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:38 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:38 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.432689 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208954 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.208937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.209042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.209173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209965 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211038 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210996 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211492 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212146 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212947 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.215095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.215567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.231024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.245045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.261434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.276633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.301151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.317028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.333623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.348741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.367248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.384065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.401651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.417090 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427450 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427520 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427537 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427555 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427580 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.432836 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.432948 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.437111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.443272 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448082 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448152 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448168 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448191 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448214 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.456699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.463185 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.468328 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.468672 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.468908 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.469149 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.469440 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.475478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.485313 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.490333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.492504 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.492869 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.493212 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.493577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.493911 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.508022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.510746 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeByt
es\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\"
:498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516420 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516489 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516510 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516539 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516571 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.527094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.538002 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.538601 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.542631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.558296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.571119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.594431 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.611651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.626397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.648604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.664625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.681279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.695379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.712153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.733239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.746960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.760668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.790297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.815297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.842519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.867012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.882724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.898521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.915112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.934385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.952227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.968554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.983660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.999704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.013880 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.037096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},
{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.052019 4183 status_manager.go:877] "Failed to update status for pod"
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.067281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.081054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc 
kubenswrapper[4183]: I0813 19:53:40.099048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"
W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.116529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.134155 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.144190 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.144297 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.150093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.169843 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.184050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.197481 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.208957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.209130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.209169 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.209610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.214938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.236241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.257580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.281956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.300995 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.318871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.336729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.351079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.373371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.389739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.406613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.430230 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.432939 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.433033 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.444450 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.666554 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667349 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667412 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb" exitCode=1 Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667440 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb"} Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667474 4183 scope.go:117] "RemoveContainer" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667995 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.668458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.817399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.833153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.854704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.870102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.895697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.911390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.927015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.942995 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.958510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.972748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.987283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.002603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.017928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.031162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.048549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.065241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.080681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.098948 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.112276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.129425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.146903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.166703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.187548 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.205905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.209529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.209707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209959 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210525 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211338 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212257 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213066 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.214629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.215184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.233630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.253210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.270092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.291469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.308630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.327316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.343225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.361917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.380121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.397201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.418045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.433110 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.433545 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.440924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.457607 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.475497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.491204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.510182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.545711 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.583944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.623729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.661893 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.674930 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.705079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.753198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.783123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.822335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.861983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.907293 4183 status_manager.go:877] "Failed to update status 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 
19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.944270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.982765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.023703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.064041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.105425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.140655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.182237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.208954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.208956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.209025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.209252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.209368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.210663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.226494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented 
the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.395062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.417003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.433565 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:42 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:42 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.433719 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.441649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.483633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.523251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.546872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.564833 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.581658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.599446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.212218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.213564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.213595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.436268 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:43 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:43 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.436381 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.208997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.210019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.210118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.433166 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:44 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:44 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.433303 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.211280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.211645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.213496 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.213566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.214026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.214564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.215036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.215751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.216119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.228506 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.244413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.266523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.282344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.301419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.317484 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.343094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.366623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.390910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.412466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.428976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.432900 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.432971 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.444519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.445495 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.462054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.483249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.536918 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.553242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.567225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.583327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.600851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.617265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.632915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.649493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.665401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.680595 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.703715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.722916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.739844 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.758872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.778887 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.799107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.818013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.839415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.857183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.874746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.892990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.913007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.938071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.957247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.976381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.994336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.010962 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.026618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.041672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.057057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.071375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.087459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.110382 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.139623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.163200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.180862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.200285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc 
kubenswrapper[4183]: I0813 19:53:46.208285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.208721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.208894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.208979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.209320 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.223042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d46
93f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.244713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.263180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.281184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.303077 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.319318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.335462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.352247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.371636 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.393350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.412632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.432418 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:46 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:46 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.432567 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.433664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.451459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.482964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.504596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.528661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208451 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.208708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209579 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.210095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210702 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211439 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211961 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.212167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.212203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.212881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.214443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.214502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.215486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.432346 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.432469 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.208694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.208915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.209140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.432761 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.433012 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209488 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.209682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.209869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.210038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.210272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209301 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.210593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.210746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211135 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211498 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211830 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212266 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.432345 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.432468 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597393 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597506 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597543 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597563 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.619535 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.624933 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625030 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625050 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625070 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625101 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.699034 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.209194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.209380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.209680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.209915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.210241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.210489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.210732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.211014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.211189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.433245 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:50 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:50 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.433396 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.447537 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.209408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.209654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.209963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210660 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211500 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.213002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.215184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.433111 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.433306 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.209042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.209359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.209552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.210090 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb" Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.246218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.273189 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.297541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.315907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.336407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.356375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.385124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.417574 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.435138 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.435236 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.442150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.460014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.475110 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.498992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.521125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.544454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.561002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.580187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.606467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.622969 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.638573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.663627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.682266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.698573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.713371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.731418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.748446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that 
your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.764586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.780340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.798676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server 
(\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.821154 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.840497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.858177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.876713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.897604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.919472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.938545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.958184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
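Every status patch in this stretch fails identically: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2024-12-26T00:46:02Z, so the kubelet's TLS handshake is rejected and no patch ever lands. A minimal stdlib Go sketch for confirming the certificate's validity window from the node (not part of any cluster tooling; openssl s_client against the same address would show the same):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Skip verification so the (expired) certificate itself can be read;
	// the address is the webhook endpoint from the kubelet errors above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		log.Fatal("no peer certificate presented")
	}
	cert := state.PeerCertificates[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
		cert.Subject, cert.NotBefore, cert.NotAfter)
}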
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.974657 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.991260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.007552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.024227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.041917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.060233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.078692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.095654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.114681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.132080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.147188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.164064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.181695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.196662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208458 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208503 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.208767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.208979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210606 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.211268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.211322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.211920 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212932 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.213036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.213231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.213547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.214002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.219682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.234186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.249949 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.276276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.293737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.307024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.324685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.340397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.360625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.383583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.403511 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.422232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.433879 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.434484 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.441560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.458694 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.476080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.496085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.514714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.209019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.209088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.209385 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.209648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.433266 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.433358 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.672919 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673057 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673077 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673115 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673144 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.208704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208941 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.209029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.209732 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.209943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.210311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.210479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210679 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.210963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211487 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211839 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.212034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.212374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.212650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.213044 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.213469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213846 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.213901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.215055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.215153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.238121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.255378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.272524 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.289503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.305168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.324147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.341375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.360200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.375966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.393325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.435869 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.436416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.437696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.449085 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.462850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.494154 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.513435 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.529023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.546387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.562010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.577598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.592148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.605024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.621065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.635968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.654699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.673109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.694539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.709601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.727622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.745645 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.760696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.778891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.797894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.820044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.837558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.851065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.867307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.893494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-
13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.911947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.928402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.945108 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.964266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.981029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.998336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.015261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.032421 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.049283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.072432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.088004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.103649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.121709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.136165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.161507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.181985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.197316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.210233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.210617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.217469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce3
2b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.234617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.249469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.264258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.285044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.301282 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch 
stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.317289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.332663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.349009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.374611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.395767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.416526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.433386 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:56 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:56 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.433965 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.434475 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.451153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.208636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.208935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210278 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.212493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.212656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.431877 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:57 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:57 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.432015 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.209929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.433730 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:58 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:58 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.434297 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.208606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.208699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.208979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.212226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.212380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.433137 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.433251 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978168 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978240 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978261 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978284 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978318 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:59Z","lastTransitionTime":"2025-08-13T19:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.997328 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:53:59Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002438 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002510 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002531 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002556 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002589 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.017464 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [status payload identical to the 19:53:59.997328 attempt above, with heartbeat/transition times advanced to 2025-08-13T19:54:00Z; duplicate omitted] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:00Z is after 2024-12-26T00:46:02Z"
expired or is not yet valid: current time 2025-08-13T19:54:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022185 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022262 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022300 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.037334 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:00Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.042236 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.042482 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.042747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.043131 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.043354 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.058106 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the previous attempt above; node conditions and image list elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:00Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063026 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063344 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063673 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063949 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.078984 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [... identical status patch payload elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:00Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.079331 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.208388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.208641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.208921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.209005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.209144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.209262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.209539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
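All three status-update attempts above fail for the same underlying reason: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2024-12-26, well before the node's current clock of 2025-08-13, so the kubelet's TLS client rejects the handshake and the status patch never reaches the API server. A minimal Go sketch of the validity-window check behind the "certificate has expired or is not yet valid" message follows; the PEM file path is hypothetical, and in the real flow crypto/x509 performs this check implicitly while verifying the certificate chain during the TLS handshake against 127.0.0.1:9743.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path to the webhook's serving certificate.
        data, err := os.ReadFile("webhook-serving-cert.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now()
        // crypto/x509 reports "certificate has expired or is not yet valid"
        // exactly when the current time falls outside [NotBefore, NotAfter].
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("certificate invalid: current time %s is outside [%s, %s]\n",
                now.Format(time.RFC3339),
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339))
            return
        }
        fmt.Println("certificate is within its validity window")
    }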
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.210013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.210204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.210277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.432038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.432153 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.451444 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209142 4183 util.go:30] "No sandbox for pod can be found. 
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210488 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.210564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.213037 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.214019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.214036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.215024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.215107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.216209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.216364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.433735 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:01 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:01 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.434016 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.208733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.209353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.210142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.433762 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:02 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:02 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.434000 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209763 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.434344 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:03 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:03 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.434988 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.775722 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log"
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.775967 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791"}
Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.803302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.821177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.842350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.860978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.879569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.899942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.918966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.936621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.953349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.969879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.989463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.013992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.030651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.047588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.063650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.078645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.092414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.109317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.123153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.142874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.158244 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.177134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.196000 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.208906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.208982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.217651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.234592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.252427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.268109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.285023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.300514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.318412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.334358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.351593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.368405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.386606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.401922 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.424902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.431854 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.431983 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.443429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.460615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.475599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.489891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.511979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.532745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.549638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.563896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.584192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.611929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.630848 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.649600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.671898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.689530 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.704682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.727601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.747214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.764618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.794921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.811659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.829037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.845755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.864724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.887374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.912999 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.938070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.964206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.987029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.008924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.048697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.071536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.209551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.209897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211875 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213330 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.214303 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.214501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.215103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.215138 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.216133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.218362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.219550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.220093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.220188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.220294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.234314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.256088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.283576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.299983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.316689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.341612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.363210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.380325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.397248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.413567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.432658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.434240 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.434383 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.453270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.453517 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.470353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.487686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.503990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.519461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.533579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.548224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.565100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.581076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.592064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.607745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.622422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.640858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.655570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.678692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.720408 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.744215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.775118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.797041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.814911 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.831856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.847576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.863379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.879451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.912946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.950900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.991453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.032447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.079265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.109344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.149366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.201227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.208975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.210035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.210377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.231338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.276045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.314026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.358928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.389035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.430025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.432278 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.432376 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.480965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.512121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.556944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.593513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.633299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.669519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.707970 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.750714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.792924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.830983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.871963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.908731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.950109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.993932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.031844 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.075245 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.112490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.149844 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208990 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.209140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.209635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.209870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210850 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210897 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.215287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.217030 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.217249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.217111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.217188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.218044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.433453 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:07 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:07 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.434455 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.209701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209950 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.210434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.210752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.211479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.211624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.212228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.434946 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:08 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:08 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.435084 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.209629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.211187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.211425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.211872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.212005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.212286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.212584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.213982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215598 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.432705 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.432882 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.143707 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.143881 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.143938 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.144597 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.144897 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9" gracePeriod=600 Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.209607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.209877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210608 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.211024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.211063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.211541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308048 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308269 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308359 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308450 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308566 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.326145 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.332704 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.332889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.332919 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.336453 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.336518 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.363927 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364339 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364359 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364386 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364421 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.397303 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.397748 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.397973 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.398139 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.398349 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422164 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422246 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422262 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422284 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422311 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.433273 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:10 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:10 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.433357 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.441424 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089
fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0
f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd
1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.441485 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.455729 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.810166 4183 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9" exitCode=0 Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.810253 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9"} Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.810292 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665"} Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.847565 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.873044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.896125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.915257 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.934393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.958094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.976658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.997966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.032262 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.048311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.069555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.086538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.109033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.135406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.156672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.176003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.197306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209156 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209968 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210848 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210862 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211522 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.212481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.213324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.214085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.218852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.219132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.223001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.241735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.259621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.275697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.291483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.307681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.323854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.350850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerI
D\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.369439 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.387483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.411092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc 
kubenswrapper[4183]: I0813 19:54:11.431332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.434505 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.435068 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.451426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.466193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.483766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.501406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.518467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.536583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.551408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.570010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.590057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.608920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.624312 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.641483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.655473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.671871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.690397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.706352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.721870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.736900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.752144 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.769432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.785206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.802281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.819192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.835712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.851680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.867529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.884369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.900742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.918433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.932581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.950350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.971433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.992175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.007187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.025746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.039994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.055988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.074367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.208536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.208650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.210017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.432897 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:12 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:12 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.432992 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.213151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.213218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.433466 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:13 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:13 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.438703 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.209968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.208965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.211065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.211235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.433232 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:14 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:14 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.433414 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.208967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.215124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.215252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.432766 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.433117 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.457098 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.948590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567a
d7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.970729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.989282 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.005768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.024998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.041209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.129289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.145290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.161627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.177245 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.193900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.206700 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.208565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.208700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.208937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.209279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.225336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.242202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.261068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.279284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.296508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.315717 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.332078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.348731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.371182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.387672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.404528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.419910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.433461 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.433592 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.435762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.459647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.483350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.506649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.525901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.543177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.561967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.579687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.596668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.612126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.628544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.653100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.678081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.697242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.712910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.727298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc 
kubenswrapper[4183]: I0813 19:54:16.746461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.764908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.782877 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.801199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.820496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.838348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.862140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.894723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.933093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.955356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.072663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.089741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.107102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.126165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.145950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.163647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.184104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.202262 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.208713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.208958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.209283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.210118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.210375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.210627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.211164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.211296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.220751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.220891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.222505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.222715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222885 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.224626 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.227690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.227979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.228641 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.239184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.255593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.273195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.289897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.308697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.327454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.343876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.362854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.433025 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.433160 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.208986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209334 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.432766 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.432929 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.209357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.209680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209721 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.209995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210440 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.212066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.212150 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.212185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214186 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.215041 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.216048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.216153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.435375 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.435480 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208239 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.209156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.209296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.433025 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.433180 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.459145 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.705929 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.705968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.705985 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.706007 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.706032 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.724535 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.729937 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730024 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730046 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730069 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730097 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.751424 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… image list identical to the preceding patch attempt at 19:54:20.724535 …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has
expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.756916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757003 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757024 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757050 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757089 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.824658 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.209992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.210217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.210335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.213766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.214021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.214196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.215127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.215733 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.215743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.215976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216502 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.217584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.218245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.218352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.218541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.218726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.218999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.219126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.219282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.219421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.219581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.219714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.219994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.220277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220391 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.220651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.222074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.223005 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.223139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.223203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.224092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.224271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.224418 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.225139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.224490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.432309 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.432416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.208763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.209433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.209688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.210110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.210405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.433421 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.433528 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.209705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209291 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210361 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211625 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212330 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212906 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.213067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.433753 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.433921 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208363 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.208690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.208870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.432268 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.432355 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208581 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.208607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.208871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.209019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.209032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210351 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211075 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.212344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.213070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.213254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.230134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.248533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.264479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.285660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.303173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.326573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.366940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.401501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.422077 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.432471 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.432600 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.440014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.456889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.461004 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.472347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.493680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.511343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.527449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.540308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.556926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.570211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.584470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.598524 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.619271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.634931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.655973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.673994 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.690758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.708883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.725130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.743404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.760254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.775733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.790392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.813140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-
13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.830011 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.846862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.861042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.875979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.893098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.908426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.929269 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.944564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.959600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.978018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.996040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.013049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.027542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.041978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.069309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.095303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.109062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.128023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 
19:54:26.145220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.160576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.177680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.194514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.209160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.209371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.209651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.209897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.210011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210079 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.211132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.211137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.213477 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request 
from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.230062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.246238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.261600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.279484 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.299522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.318516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.335201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.352749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.368535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.386111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.402332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.417747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.433402 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.433887 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208552 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.208703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.209112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.209157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209207 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.209305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.210271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.216354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.216635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.216915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.217406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217885 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217996 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.218288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.218568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.218685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.218874 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.218929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.219000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.219169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.223225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.223350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.223408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.432437 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.432510 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.209514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209381 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.209842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.210184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.210507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.210584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.211107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.211256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.432638 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.432855 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.209644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210103 4183 util.go:30] "No sandbox for pod can be found. 
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211365 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.212094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.212271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.212599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.217009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.433510 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.433741 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210268 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.210453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.210656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.211387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.212363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.213176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.213349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.213538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.213895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.214069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.433722 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.433901 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.462856 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033020 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033108 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033128 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033152 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.050258 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056305 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056339 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056355 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056401 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.071383 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.076711 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077010 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077213 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077577 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.093739 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099145 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099212 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099230 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099270 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.113961 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119626 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119677 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119692 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119710 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119732 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.133861 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.134301 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.209248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.209439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209573 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.209695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210105 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210725 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.212289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.212307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.212504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.213480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.216046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.433716 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.434761 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209449 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.210077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.210165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.433040 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.433194 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209395 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.208935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210117 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210454 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.212544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.213116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.213417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.213640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.433034 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:33 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:33 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.433302 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208638 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.209034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.209243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.211232 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.211682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.432038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:34 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:34 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.432175 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.212072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.212248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.212883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.231367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.259403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.279430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.306653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.345733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.376635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.399440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.414891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.429115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.431071 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.431150 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.445895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.461369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.464272 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.479367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.497942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.515475 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.532403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.547528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.566078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.587104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.604306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.619053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.634533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.651601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.669017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.687095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.707145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.728680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.745283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.763651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.779627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.795344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.818673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.831405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.848366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.863890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.880017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.897599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.910502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.933138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.946708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.965636 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.983686 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.999723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.018199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.032680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.049943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.070090 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.085296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.103914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.119738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.136328 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.150407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.167647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.183476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.207498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.208390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208487 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.208873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.208933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.209104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.228186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.246442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.261852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.277960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.303487 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.327849 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.352335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.369874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.394766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.432518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.433012 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.433157 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.454684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.472545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.494317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209132 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210360 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211488 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.212052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.213563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.213677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.214672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.215266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.215761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.215893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.432700 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:37 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:37 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.432893 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.209425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.209723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.210301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.434304 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:38 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:38 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.434764 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209356 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.209609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.209984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209996 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210574 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211304 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.213106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.213515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.214035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.214120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.214203 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.215288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.215478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.213960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.432882 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.433301 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.208964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.209197 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.210034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.432324 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.432462 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.465764 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209101 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.209498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.209767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.210439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.210636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.211019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.211338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.211690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.212116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.212430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.212640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.212737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.213026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.213248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.213338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.213488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.213689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214294 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.215217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.215322 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.215496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.215601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.215952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216361 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.217574 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.217639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217841 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.218035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.218133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.218168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.218509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.218910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.435116 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.435243 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502069 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502140 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502160 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502189 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502219 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.522002 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526909 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526933 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526959 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526986 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.542164 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547568 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547887 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547938 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547972 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.548000 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.608295 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
[remainder of status patch payload identical to the 19:54:41.542164 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has
expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614594 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614715 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614735 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614756 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614865 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.629391 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
[remainder of status patch payload identical to the 19:54:41.542164 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has
expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636401 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636502 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636529 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636556 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636584 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.654760 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
[remainder of status patch payload identical to the 19:54:41.542164 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has
expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.654994 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.210077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.210559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.433538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.433638 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.209316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209524 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.209678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211114 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212042 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213487 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.214214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.214642 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.431848 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.431962 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.209157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.209634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.435268 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.435394 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.210604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.211512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.211695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.212108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.212498 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.212668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.213998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.214242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.214542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.214760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214970 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.215517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.215922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.216173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.216493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.216687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.217408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.217739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.218388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219300 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.219936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.220028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.220299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.220382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.220417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.220574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.220688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.221131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.221213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.221292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.221897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.222016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.223938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.225149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.227179 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.240007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.259224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.275105 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.290325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.307423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.323207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.342074 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.370459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.387924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.402115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.421449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.431642 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.432233 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.438197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.455730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.468173 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.474267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.490929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.507429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.521314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.538859 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.555219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.569369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.589336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.612362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.632632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.648393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.663047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.678079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.694226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.709116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.723651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.738885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.754187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.769606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.786263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.799613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.815253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.830166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.846436 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.859958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.877492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.898599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.914895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.931482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.950313 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.966261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.996191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.023186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.049970 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.072034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.094562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.119466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.138111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.153682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.168901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.187256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.205375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208451 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.208674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.208974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.211161 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.230252 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.258103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.278343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.299250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.321347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.352887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.375028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.393393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.412304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc 
kubenswrapper[4183]: I0813 19:54:46.432079 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.432192 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.434107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.454652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.470021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.979183 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.983368 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"} Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.984354 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.004075 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.022911 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server 
(\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.041708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.059232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.076089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.094130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.114271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.139764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.159564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.177680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.198244 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208418 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.208661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.208743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208838 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.208955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209747 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210698 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211421 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.385702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.407160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.426706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.431948 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.432059 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.443756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.460142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.476657 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.494628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.517651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.539526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.556764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.576728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.601147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.618215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.634440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.650510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.666098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.682414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.699579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.713272 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.731917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.746511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.764588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.785187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.801677 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.820296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.840186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.857902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.878992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.901394 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.918627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.939003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.963913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.981263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.990280 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/5.log" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.991066 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.996318 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" exitCode=1 Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.996483 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"} Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.996545 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.999114 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.004133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.007433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.050322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.067002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.082673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.102605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.124526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.148231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.170532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.187756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.204730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.208942 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.208964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.224150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.240943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.269389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a
40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.284709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.300043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.316237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.369308 4183 status_manager.go:877] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed 
certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.386124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.424071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.434069 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.434370 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.463229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.506307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
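The kube-multus termination message above is a readiness-indicator timeout: the daemon started at 19:52:54, polled for /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, and gave up 45 seconds later with the Kubernetes wait helper's generic "timed out waiting for the condition". A minimal stand-in for that wait loop; the one-second interval and 45s timeout are assumptions inferred from the timestamps, not values read from multus configuration:

    // Poll for the delegate CNI config until it appears or the timeout lapses.
    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    func waitForFile(path string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil // default network config exists; readiness indicator satisfied
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for the condition")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
    		time.Second, 45*time.Second)
    	fmt.Println(err)
    }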
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.541843 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.583871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.625697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.667690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.702519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.746164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.783633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.823565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.863562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.903219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
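The webhook failures repeating through this stretch share one root cause: the serving certificate behind https://127.0.0.1:9743 expired at 2024-12-26T00:46:02Z, so every TLS handshake fails the validity-window check. That check reduces to comparing the clock against the certificate's NotBefore/NotAfter fields; a self-contained sketch, with the PEM path as a placeholder:

    // Check a PEM certificate's validity window the way the TLS verifier does.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/tmp/serving-cert.pem") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	now := time.Now()
    	switch {
    	case now.Before(cert.NotBefore):
    		fmt.Println("certificate is not yet valid")
    	case now.After(cert.NotAfter):
    		fmt.Printf("certificate has expired: current time %s is after %s\n",
    			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
    	default:
    		fmt.Println("certificate is within its validity window")
    	}
    }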
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.944529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.985494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.003689 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/5.log" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.010720 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.011307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.028365 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
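The ovnkube-controller CrashLoopBackOff above reports a 2m40s back-off. A kubelet-style restart delay — assumed here as 10s initial, doubling per consecutive crash, capped at 5m — lands on exactly that figure at the fifth crash, as the sequence 10s, 20s, 40s, 1m20s, 2m40s, 5m shows:

    // Enumerate an assumed kubelet crash-restart back-off sequence.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 10 * time.Second   // assumed initial back-off
    	maxDelay := 5 * time.Minute // assumed cap
    	for crash := 1; crash <= 7; crash++ {
    		fmt.Printf("crash %d -> back-off %s\n", crash, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }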
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.064174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.104760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.141538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.183514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208742 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.208762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209678 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209839 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211313 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211657 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.212208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.212310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.214523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.231071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.267169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.307087 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.344234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.387432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.423284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.432149 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.432526 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.465195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.511085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.544317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.597975 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.624935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.668350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.701641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.744235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.791191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.830885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.866166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.903388 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.913241 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.913510 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.913891 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.913906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.914296 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.913918116 +0000 UTC m=+778.606583354 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.914507 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.914483242 +0000 UTC m=+778.607148110 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.914705 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.915104 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.915323 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.915224 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.915430 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.914881 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.916339 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.915604414 +0000 UTC m=+778.608976492 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.916496 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.916479339 +0000 UTC m=+778.609144057 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.917014 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.916995534 +0000 UTC m=+778.609660332 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.918182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.918297 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.918635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.918753 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.918839 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.918764944 +0000 UTC m=+778.611429532 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919134 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919188 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919260 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919319 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919345 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919371 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919494 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919520 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919629 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919675 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") 
pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920231 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920279 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920267767 +0000 UTC m=+778.612932585 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920301 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920292518 +0000 UTC m=+778.612957166 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920340 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920372 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.92036342 +0000 UTC m=+778.613028118 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920421 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920457 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920445242 +0000 UTC m=+778.613109930 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920498 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920520524 +0000 UTC m=+778.613185212 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920568 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920590326 +0000 UTC m=+778.613255134 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920648 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920681 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:51.920671489 +0000 UTC m=+778.613336297 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920856 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920885 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920900 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920952 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920938726 +0000 UTC m=+778.613603424 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921011 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921047 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921036399 +0000 UTC m=+778.613701317 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921090 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921125 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:51.921116101 +0000 UTC m=+778.613781009 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921173 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921208 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921198484 +0000 UTC m=+778.613863192 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921254 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921283 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921274646 +0000 UTC m=+778.613939364 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921325 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921350078 +0000 UTC m=+778.614014756 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921408 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921443 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.9214332 +0000 UTC m=+778.614097868 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921489 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921510553 +0000 UTC m=+778.614175201 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921576 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921591 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921624 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921614996 +0000 UTC m=+778.614279684 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921668 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921699 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921689558 +0000 UTC m=+778.614354236 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921740 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921896 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921878233 +0000 UTC m=+778.614543341 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921959 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921997 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921987116 +0000 UTC m=+778.614651794 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.943583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.983439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015143 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015612 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015720 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" exitCode=1 Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015750 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791"} Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015873 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.016392 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.017160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.022169 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.022569 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023091 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023118 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023342 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023367 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.023753 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.023926 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.023907234 +0000 UTC m=+778.716571882 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.023994 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024015577 +0000 UTC m=+778.716680215 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024065 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024090 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024083329 +0000 UTC m=+778.716747977 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024127 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024152 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024144361 +0000 UTC m=+778.716809119 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024182 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024200093 +0000 UTC m=+778.716864871 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024247 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024274 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024266485 +0000 UTC m=+778.716931253 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024312 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024341 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024332367 +0000 UTC m=+778.716997005 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024374 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:52.024391398 +0000 UTC m=+778.717056036 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024434 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024480 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024472751 +0000 UTC m=+778.717137389 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.112238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.125847 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.125927 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126026 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126134 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126181 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126215 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126247 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod
openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126280 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126304 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126223 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126349 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126202 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126327 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126303866 +0000 UTC m=+778.818968554 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126410 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126398399 +0000 UTC m=+778.819063047 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126428 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126420499 +0000 UTC m=+778.819085158 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126449 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12643953 +0000 UTC m=+778.819104238 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126540 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126636 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126736 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:50 crc kubenswrapper[4183]:
E0813 19:54:50.126749 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126848 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126771 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126886 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126911 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126939 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126955 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126975 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126940 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126919774 +0000 UTC m=+778.819584572 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126990 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127007 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127009 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126995766 +0000 UTC m=+778.819660434 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127029 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127041 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127048 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127059 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126852 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed.
No retries permitted until 2025-08-13 19:56:52.127066728 +0000 UTC m=+778.819731406 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127109 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127134 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12712121 +0000 UTC m=+778.819785938 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127138 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12714568 +0000 UTC m=+778.819810368 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127189 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127167371 +0000 UTC m=+778.819832019 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127198732 +0000 UTC m=+778.819863380 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127214 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127224 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127216532 +0000 UTC m=+778.819881200 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127233 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127244 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127310 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127296184 +0000 UTC m=+778.819960883 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127432 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127458 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127481 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127493 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127553 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127581 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127600 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127612 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" 
not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127589303 +0000 UTC m=+778.820253991 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127737 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127871 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12785508 +0000 UTC m=+778.820519768 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127921 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127906822 +0000 UTC m=+778.820571530 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128135 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128197 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128231 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128267 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128268 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128305 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128326 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128312973 +0000 UTC m=+778.820977781 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128361 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128380 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128390626 +0000 UTC m=+778.821055364 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128404 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128438 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128453 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128440887 +0000 UTC m=+778.821105575 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128473 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128495 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128509599 +0000 UTC m=+778.821174327 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128439 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128578 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128567541 +0000 UTC m=+778.821232239 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128578 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128600 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128611 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128624 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128637163 +0000 UTC m=+778.821301871 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128660 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128685 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128702 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128691084 +0000 UTC m=+778.821355782 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128709 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128728 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128740 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128771 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128852 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128611 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128856 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128836468 +0000 UTC m=+778.821502216 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128907 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128917 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128878 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128922 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12890901 +0000 UTC m=+778.821573748 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128943 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128964 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128950722 +0000 UTC m=+778.821615360 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128984 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:52.128973962 +0000 UTC m=+778.821638640 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128986 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129030 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129021774 +0000 UTC m=+778.821686462 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129048 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129060 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129149 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129190 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129173258 +0000 UTC m=+778.821837946 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129166 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129211 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129226 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129218 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129207419 +0000 UTC m=+778.821872057 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129234 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129267 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12925417 +0000 UTC m=+778.821918868 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129307 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129294471 +0000 UTC m=+778.821959169 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129400375 +0000 UTC m=+778.822065193 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129486 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129533 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129521058 +0000 UTC m=+778.822185736 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129582 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129650 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129686 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129861 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129874 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129908259 +0000 UTC m=+778.822573037 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129961 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129977 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.130024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.130039 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.130080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.130068084 +0000 UTC m=+778.822732772 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132218 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132528 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132552 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132570 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132595 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132576685 +0000 UTC m=+778.825241403 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132658 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132645697 +0000 UTC m=+778.825310385 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132680 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132670098 +0000 UTC m=+778.825334786 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132703 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132690988 +0000 UTC m=+778.825355656 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132715 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132732 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132721179 +0000 UTC m=+778.825385837 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132538 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132765 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.13274916 +0000 UTC m=+778.825413858 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132893 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132905 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132922 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132932 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132953 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132982 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133036 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133023498 +0000 UTC m=+778.825688346 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133199 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133186533 +0000 UTC m=+778.825851161 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133273 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133231 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133217613 +0000 UTC m=+778.825882211 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133331537 +0000 UTC m=+778.825996135 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133506 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133532552 +0000 UTC m=+778.826197360 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134406 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134541 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134588 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134926 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135019 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135183 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: 
\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.135625 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.135688 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.135673293 +0000 UTC m=+778.828338082 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136185 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136355 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136355 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136473 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136551 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136564 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136574 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136577 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136841 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136919 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not 
registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137012 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137190 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137288 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137325 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137346 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137362 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137490 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137505 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137514 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137574 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.137551657 +0000 UTC m=+778.830216475 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137617 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.137603969 +0000 UTC m=+778.830268597 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.137980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.138095 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.138165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138259 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138236237 +0000 UTC m=+778.830900985 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138295 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:52.138281538 +0000 UTC m=+778.830946236 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138494 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138480444 +0000 UTC m=+778.831145132 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138518 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138507944 +0000 UTC m=+778.831172632 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138546 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138529035 +0000 UTC m=+778.831193863 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138568 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138556696 +0000 UTC m=+778.831221394 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:52.138580156 +0000 UTC m=+778.831244844 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138614 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138599577 +0000 UTC m=+778.831264225 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138635 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138624268 +0000 UTC m=+778.831288946 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138652 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138643238 +0000 UTC m=+778.831307906 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139058 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139079 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139089 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139163 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138758162 +0000 UTC m=+778.831422820 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139243 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139276 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139258376 +0000 UTC m=+778.831923084 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139303 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139290797 +0000 UTC m=+778.831955465 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139314887 +0000 UTC m=+778.831979565 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139459 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139483 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139497 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139662 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139648807 +0000 UTC m=+778.832313505 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.153347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.168408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.186597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
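Every status patch in the entries above fails for the same underlying reason: the serving certificate of the pod.network-node-identity.openshift.io webhook expired on 2024-12-26T00:46:02Z, months before the kubelet's current time of 2025-08-13T19:54:50Z, so TLS verification rejects the connection before the POST to https://127.0.0.1:9743/pod ever happens. A minimal Go sketch of the expiry comparison that fails here; the certificate file name is an illustrative assumption, this log does not name one:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical path; substitute the webhook's actual serving cert.
    	raw, err := os.ReadFile("webhook-serving.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("no PEM data found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if now := time.Now(); now.After(cert.NotAfter) {
    		// Same condition the TLS handshake reports in the log:
    		// "certificate has expired or is not yet valid".
    		fmt.Printf("x509: certificate has expired: current time %s is after %s\n",
    			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
    	}
    }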
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.209627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.209984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
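The "network is not ready" entries above will repeat until a CNI network config appears on disk. A rough, illustrative approximation of that readiness test (not actual kubelet or CRI-O source), using the directory the log itself names:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // cniConfigPresent reports whether any CNI network config exists in dir.
    func cniConfigPresent(dir string) bool {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return false // a missing or unreadable dir counts as "no config"
    	}
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	if !cniConfigPresent("/etc/kubernetes/cni/net.d") {
    		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady")
    	}
    }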
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.234358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239388 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239413 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239425 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.239474365 +0000 UTC m=+778.932138983 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239238 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239860 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239921 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240043 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240054 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240077 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240068332 +0000 UTC m=+778.932732950 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240104 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240091983 +0000 UTC m=+778.932756601 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240154 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240179 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240193 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240238 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240224957 +0000 UTC m=+778.932889615 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239976 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240274 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240291 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240313619 +0000 UTC m=+778.932978247 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.240356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240438 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
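Each failing kube-api-access-* volume in these entries is a projected volume that combines a bound service-account token with exactly the two ConfigMaps the errors report as "not registered": kube-root-ca.crt and openshift-service-ca.crt. A sketch of that composition using the upstream corev1 types; it assumes k8s.io/api is available on the module path, and the token expiry value is illustrative:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // kubeAPIAccessVolume approximates the sources behind a kube-api-access-*
    // volume as seen in this log: a projected SA token plus two CA ConfigMaps.
    func kubeAPIAccessVolume(name string) corev1.Volume {
    	expiry := int64(3607) // illustrative; not taken from this log
    	return corev1.Volume{
    		Name: name,
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{
    					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
    						Path:              "token",
    						ExpirationSeconds: &expiry,
    					}},
    					{ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
    						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
    					}},
    					{ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
    						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
    					}},
    				},
    			},
    		},
    	}
    }

    func main() {
    	v := kubeAPIAccessVolume("kube-api-access-nf4t2")
    	fmt.Println(v.Name, "with", len(v.VolumeSource.Projected.Sources), "sources")
    }

Until the kubelet's object cache sees both ConfigMaps for the pod's namespace, MountVolume.SetUp for such a volume cannot be satisfied, which is what the "not registered" errors record.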
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240589 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240579357 +0000 UTC m=+778.933243985 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.240702 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240900 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240922 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240934 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241118 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241135 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241147 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241060 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
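The uniform "No retries permitted until ... (durationBeforeRetry 2m2s)" suffixes show these mount operations sitting at the ceiling of the kubelet's exponential backoff: each consecutive failure doubles the delay until it saturates, so every volume is rescheduled exactly two minutes and two seconds out. A toy sketch of that cadence; the 500ms initial delay and doubling factor are assumptions, only the 2m2s cap is visible in this log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 500 * time.Millisecond           // assumed initial backoff
    	maxDelay := 2*time.Minute + 2*time.Second // the 2m2s cap seen above
    	for attempt := 1; attempt <= 12; attempt++ {
    		fmt.Printf("attempt %2d: durationBeforeRetry %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay // once saturated, every retry prints 2m2s
    		}
    	}
    }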
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241160 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241145283 +0000 UTC m=+778.933809931 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241281 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241267677 +0000 UTC m=+778.933932355 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241498 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241507 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241518 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241529 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241531 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241541 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241559965 +0000 UTC m=+778.934224583 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241603 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241614 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241633 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241636 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241628207 +0000 UTC m=+778.934292825 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241657 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241645127 +0000 UTC m=+778.934309775 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241680 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241690 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241698 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241717949 +0000 UTC m=+778.934382577 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241873 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241859883 +0000 UTC m=+778.934524631 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242030 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242058 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242106 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242142 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242134141 +0000 UTC m=+778.934798759 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242175 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242151 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242225 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242217404 +0000 UTC m=+778.934882022 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242245 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242271 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242264985 +0000 UTC m=+778.934929603 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242453 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242481 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242546 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242517562 +0000 UTC m=+778.935182150 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242670 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242679 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
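The same "not registered" pattern recurs for dozens of namespace/object pairs above and below. A small, hypothetical triage helper for a log like this one, counting failures per object so the breadth of the problem is visible at a glance (reads the log on stdin):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	// Matches e.g.: object "openshift-console"/"kube-root-ca.crt" not registered
    	re := regexp.MustCompile(`object "([^"]+)"/"([^"]+)" not registered`)
    	counts := map[string]int{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries can be very long
    	for sc.Scan() {
    		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
    			counts[m[1]+"/"+m[2]]++
    		}
    	}
    	for obj, n := range counts {
    		fmt.Printf("%6d  %s\n", n, obj)
    	}
    }

Fed this section, it would report kube-root-ca.crt and openshift-service-ca.crt pairs across many namespaces at once, which suggests a node-wide condition rather than a problem with any individual pod.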
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242697 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242689817 +0000 UTC m=+778.935354405 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242742 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242755 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242764 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242877 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242864322 +0000 UTC m=+778.935528950 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242907 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242933234 +0000 UTC m=+778.935597852 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242985 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243097 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243108 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243142 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.24313154 +0000 UTC m=+778.935796278 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243184 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243200002 +0000 UTC m=+778.935864620 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242683 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243223 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243245 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243238833 +0000 UTC m=+778.935903461 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243033 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243279 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243304 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243342 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243366 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243392 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243416 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243449 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243537 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243072 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243594013 +0000 UTC m=+778.936258691 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243629 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243660 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243653255 +0000 UTC m=+778.936317873 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243702 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243730 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243737 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243727597 +0000 UTC m=+778.936392235 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243758 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243849 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243989 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.244065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.244151 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.244291 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244388 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244400 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244409 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244437 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244429247 +0000 UTC m=+778.937093865 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244478 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244489 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244498 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244520 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244513969 +0000 UTC m=+778.937178597 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244552 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244568421 +0000 UTC m=+778.937233039 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244614 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244629 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244636 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244661 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244653833 +0000 UTC m=+778.937318451 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244677 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244670084 +0000 UTC m=+778.937334682 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244714 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244724 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244731 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244754 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244747586 +0000 UTC m=+778.937412214 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244915 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244930 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244938 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244966 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244957402 +0000 UTC m=+778.937622020 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245319 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.245311182 +0000 UTC m=+778.937975920 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245436 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245458 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245476 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245487 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245464 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.245456736 +0000 UTC m=+778.938121364 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:52.245532438 +0000 UTC m=+778.938197076 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245563 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.245554229 +0000 UTC m=+778.938218907 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.266125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.302926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.341687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.345649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.346894 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.346993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347377 4183 
projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347422 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347481 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.347464637 +0000 UTC m=+779.040129395 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347661 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347704 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347717 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347747 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.347737474 +0000 UTC m=+779.040402172 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348006 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348048 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348059 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348116 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.348105455 +0000 UTC m=+779.040770163 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.386081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 
19:54:50.424660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.433886 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.434003 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.461591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.469877 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.504766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.544252 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.582352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.621119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.664313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.705319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.745699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.784401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.824759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.867726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.905702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.943268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.984275 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.023422 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.031437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.066347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.102216 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.140548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.187704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.208662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.208931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.208993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209954 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210402 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211414 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.212036 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.212367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.229056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.262028 4183 status_manager.go:877]
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.304446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.346546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.383386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.425427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.434259 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:51 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:51 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.434360 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.464237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.505191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.544248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.596936 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.626083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.663548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.706377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.745141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.786217 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.822228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.862367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864260 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864275 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864317 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.881356 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.887980 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888346 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888455 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888651 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888855 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.905387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.909715 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":
[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08
dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915192 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915281 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915325 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915351 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.930991 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935734 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935872 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935897 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935924 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935952 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.945186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.970708 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.983754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.022489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.064920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.110390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.143981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.188057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.210229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.210335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.225447 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.265255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.397442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.420496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.433919 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.434441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.439879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.467628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.493539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.558874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.595904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.618063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.638364 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.663620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.704286 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.743501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.785609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.823055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.861640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.911888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a
40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.942851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.982912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.022895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.066123 4183 status_manager.go:877] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed 
certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.104664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.142471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.182116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208893 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209074 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209185 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.225002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.260531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.300846 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.349891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.400562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.439317 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.439448 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.474565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.500498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.536388 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.557510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.650514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.678939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.705223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.749555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.769203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.790200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.821865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.932385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.952008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.968266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.209647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.209871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.433355 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.433447 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674290 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674438 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674487 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674523 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674544 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208348 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208730 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209100 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.211544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.213260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.213704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.214184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.229499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.244422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.260315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.275985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.293209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.312525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.333056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.348308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.364608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.381594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.398950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.417978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.432617 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.432738 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.445479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.461423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.471930 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
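Every "Failed to update status for pod" entry above shares one root cause, visible in the repeated suffix: the status patch is rejected because the serving certificate of the pod.network-node-identity.openshift.io webhook expired at 2024-12-26T00:46:02Z, long before the node's clock time of 2025-08-13T19:54:55Z. The following is a minimal diagnostic sketch, not part of the log, assuming only what the log shows (the webhook listening at 127.0.0.1:9743); it dials the endpoint and prints each peer certificate's validity window, mirroring the NotBefore/NotAfter comparison that crypto/x509 performs and that fails in every entry above.

// certcheck.go: diagnostic sketch for the recurring x509 failure.
// The address is taken from the Post "https://127.0.0.1:9743/pod?timeout=10s"
// calls in the log; adjust it to inspect a different webhook endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint named in the log entries above
	// InsecureSkipVerify is deliberate: it lets us inspect an already-expired
	// chain instead of failing the handshake the way the kubelet's client does.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", addr, err)
		os.Exit(1)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		// A certificate is invalid if "now" falls outside [NotBefore, NotAfter],
		// which is exactly the "expired or is not yet valid" condition logged above.
		expired := now.After(cert.NotAfter) || now.Before(cert.NotBefore)
		fmt.Printf("subject=%q notBefore=%s notAfter=%s invalid=%v\n",
			cert.Subject.CommonName,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			expired)
	}
}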
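The other distinct failure in this window is the "Container runtime network not ready" error immediately above: the kubelet found no CNI configuration in /etc/kubernetes/cni/net.d/, which also explains the many pods stuck in ContainerCreating with podIP null. A companion sketch, again assuming only the path named in that log entry, checks the directory for the usual .conf/.conflist/.json network configuration files.

// cnicheck.go: companion sketch for the NetworkPluginNotReady entry.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log entry above
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintf(os.Stderr, "read %s: %v\n", dir, err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// Matches the kubelet's complaint: the network plugin has not
		// written its configuration yet ("Has your network provider started?").
		fmt.Println("no CNI configuration files found in", dir)
	}
}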
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.477542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.496230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.526141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.541361 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.559281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.576610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.590130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.606901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.620106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.658302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.672222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.690672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.733199 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.764121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.786088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.803209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.821187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.839703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.855249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.873204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.891683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.912246 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.929234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.948525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.975739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.003162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.019617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.037213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.054994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.070301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.095497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.114927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.129125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.144330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.168599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.193386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208638 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.209653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.209987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.210727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.227086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.251561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.269053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.286215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.300984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.318296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.335696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.352451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.370885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.386214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.421858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.432623 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.432714 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.464577 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.504704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.546354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.583345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.622537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209333 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.209552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209976 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.209880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210436 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.213063 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.215024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.433142 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.433391 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.209554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.210286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.210460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.210596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.210651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.210428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.433957 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.434101 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.210979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211635 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.211850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.211969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212478 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.213181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.213196 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.213972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.214396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.215242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.215212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.215955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.217036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.217144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.432859 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.432994 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.210873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.211643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.211921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.212136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.212679 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.432359 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.432441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.473569 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.209436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.209914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.210659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211444 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212099 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213160 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213599 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.214118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.214333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.214664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.215014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.215199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.215425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.432877 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.432969 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059466 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059721 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059752 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.075262 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080759 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080880 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080898 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080919 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080940 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.098527 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... elided: the image list and nodeInfo in this payload are byte-identical to those in the payload printed above; three further retry cycles (node events at 19:55:02.106482, 19:55:02.131481, and 19:55:02.156171; patch attempts E0813 19:55:02.125104, 19:55:02.149335, and 19:55:02.184532) repeat the same NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, and NodeNotReady events, the same "Node became not ready" condition, and the same full status payload verbatim, and every attempt was rejected with the identical error ... }}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.184600 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.208552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.208886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.432679 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.432963 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.208291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.208920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209226 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.208367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.208426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.209610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210324 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210670 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211638 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.212316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.213749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.216238 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.232063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.247610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.265721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.283497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.302981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.323294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.350613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.366223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.382297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.397395 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.417904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.432142 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.432243 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.434398 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.454439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.471231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.488719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.555332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.572157 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.588757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.603857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.621247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.637302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.653326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.672145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.688405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.705952 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.726645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.742215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.760405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.780920 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.799180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.814006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.830004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.846385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.864209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.901163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.919360 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.935888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.952267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.974215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.989706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.005349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.021434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.036710 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.055051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.071295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.100228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a
40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.116890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.131052 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.145063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.163065 4183 status_manager.go:877] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed 
certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.178068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.192517 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.208857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.208922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.208879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.210082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.210164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.211280 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.227991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.242520 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.256518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.274705 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.294055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.311960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.328612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.346929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.364604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.380202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.397644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.413517 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.429648 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.433177 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.433363 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.446724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.208705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209520 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209731 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210124 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210599 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212810 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.214007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.216134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.216167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.216265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.231175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.250362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.265662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.293002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.319971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.355288 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.373403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.393258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.418137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.434027 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.434105 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.436396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.451638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.475868 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.480125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.498926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.511933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.528649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.548498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.565532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.582098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.604239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.624152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.646558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.664405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.687586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.708194 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.730101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.747442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.766012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.782908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.800496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.823873 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.847081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.862973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.880015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.899089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.917710 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.935229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.953451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.970115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.988675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.004078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.029213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a
40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.044491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.058377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.075702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.095342 4183 status_manager.go:877] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed 
certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.111663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.125131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.141162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.157732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.171548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.185550 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.200047 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209450 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.210066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.210181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.210412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.218723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request 
from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.236574 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.255971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.272664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.289373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.304926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.331647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.351437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.368268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.389485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.428467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.432070 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:06 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:06 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.432476 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.467640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.508950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.547345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.596599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.208959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.210043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210256 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.210348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.210676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209491 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.211947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.212250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215853 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.216598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.216605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.217066 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.217499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.217582 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.432260 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:07 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:07 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.432345 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.210061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.432578 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:08 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:08 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.432676 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.209334 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.209718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.210010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.210336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.211199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.211419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.211725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.212157 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.212455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.212585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.213010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.213195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.213667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.214057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.214177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.214343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.214516 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.214713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.215707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.215745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.216547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.215534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217205 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.218925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.218587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.220026 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.220203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.220244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219919 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.221322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221772 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.223622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.433538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.433685 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.212061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.212236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.212413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.432199 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.432323 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.477003 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.100349 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.208991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209229 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210580 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211091 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.212561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.212606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.212620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.213028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.214965 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.215468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.432630 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.432725 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.208482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.208536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.208740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.208502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.208996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209120 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312336 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312499 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312548 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312627 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312682 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.336305 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.388729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.389275 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.389477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.389883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.390274 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.405680 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.413360 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.413707 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.413911 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.414047 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.414164 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.429545 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.431016 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.431599 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.431959 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208508 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208665 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.210407 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.210513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.210562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.212178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.212691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.213267 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.213433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.433551 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.434499 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.208428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.208572 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.208664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.208746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.432945 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.433072 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.208732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208760 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.208904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209282 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210617 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212523 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212901 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.213770 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.214064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.271578 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.288009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.314696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.339255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.362935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.379905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.396993 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.410210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.425332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.431379 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.431510 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.441597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.456529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.473162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.478088 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
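Every "Failed to update status for pod" entry above fails for the same reason: the pod.network-node-identity.openshift.io webhook serves a TLS certificate whose validity ended 2024-12-26T00:46:02Z, while the node clock reads 2025-08-13. A minimal sketch to confirm the certificate's validity window from the node itself, assuming the endpoint 127.0.0.1:9743 from the Post URLs above is still listening and openssl is on PATH (stdlib-only Python, not part of the original log):

# Fetch the webhook's serving certificate and print its notBefore/notAfter
# window. get_server_certificate() does no chain verification with
# ca_certs unset, so an expired certificate is still returned.
import ssl
import subprocess

pem = ssl.get_server_certificate(("127.0.0.1", 9743))
result = subprocess.run(
    ["openssl", "x509", "-noout", "-dates"],  # reads the PEM from stdin
    input=pem.encode(),
    capture_output=True,
    check=True,
)
print(result.stdout.decode())  # per the errors above, expect notAfter=Dec 26 00:46:02 2024 GMT

If the printed notAfter matches the date in the x509 errors, the failures point at certificate rotation (or a badly skewed node clock), not at the individual pods whose status patches are being rejected.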
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.491167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.518463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.536989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.550933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.566937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.585086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.598515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.618050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.638710 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.657374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.675866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.694321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.715765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.734584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.756474 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.776639 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.800921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.823967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.843718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.864204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.884223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.905011 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.934225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.954983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.971916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.988501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.003422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.023573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.043104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.059921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.079408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.097012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.119124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.144412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.161658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.177352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.194116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc 
kubenswrapper[4183]: I0813 19:55:16.208765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.208927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.208861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.209158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.209469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.209866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.209993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.214635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d
4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.238417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.256768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.279037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.299457 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.315298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.332228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.350105 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.367126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.384980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.401862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.416449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.432643 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Aug 13 19:55:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.432737 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.434215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.452651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.471594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.492431 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.511474 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.532210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209081 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.209752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.209988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.211041 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.211065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.212648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.213629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.214014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.214345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.214417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.214639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.214881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.216317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.216500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.216645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.220934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.221629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.222623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.434637 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.434881 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.208717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.208958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.209073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.432644 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.432873 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.209583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.209729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210122 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210615 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211158 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212522 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212851 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213319 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213754 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.214023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.214384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.215608 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.216116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.216215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.432379 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.432546 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.434604 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.434755 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.480008 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.211410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.212321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.212768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.213032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.213362 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.213603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.213999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214168 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.215398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.215504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.215979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.216082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.217382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217436 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.217853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.217908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.218193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.218279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.218758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219636 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.220123 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.220143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.221085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.221171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.221326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.223100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.223369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.433308 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.433915 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.208922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.209018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.209128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.209227 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.209237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.209358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.210072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.210241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.210949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.433042 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.433731 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711431 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711498 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711516 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711536 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711557 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.727956 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.733520 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.733744 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.733942 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.734119 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.734235 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.750310 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756271 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756292 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756318 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756354 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781640 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781703 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781719 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781761 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806411 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806435 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806472 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.825143 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
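
Two independent failures are visible above: every node-status patch is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate whose NotAfter (2024-12-26T00:46:02Z) is long past the node clock (2025-08-13), and the node itself stays NotReady because no CNI configuration exists under /etc/kubernetes/cni/net.d/. Below is a minimal Go sketch of the two checks an operator might run on the node; the endpoint and directory come from the log, while the program itself is illustrative and not part of the kubelet.

// diagnose.go: a diagnostic sketch, assuming it runs on the node itself.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	// 1) Inspect the webhook serving certificate the kubelet fails to verify.
	// InsecureSkipVerify is deliberate: we want the certificate's validity
	// window even though verification fails, exactly as it does for the kubelet.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial webhook:", err)
	} else {
		defer conn.Close()
		now := time.Now()
		for _, cert := range conn.ConnectionState().PeerCertificates {
			fmt.Printf("cert %q: NotBefore=%s NotAfter=%s\n",
				cert.Subject.CommonName,
				cert.NotBefore.Format(time.RFC3339),
				cert.NotAfter.Format(time.RFC3339))
			if now.After(cert.NotAfter) {
				// Matches the x509 error above: current time is after NotAfter.
				fmt.Println("  => certificate has expired")
			} else if now.Before(cert.NotBefore) {
				fmt.Println("  => certificate not yet valid")
			}
		}
	}

	// 2) Look for CNI config files; their absence is exactly what keeps the
	// Ready condition False ("No CNI configuration file in ...").
	entries, err := os.ReadDir("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("read CNI config dir:", err)
	} else if len(entries) == 0 {
		fmt.Println("no CNI configuration file in /etc/kubernetes/cni/net.d/")
	} else {
		for _, e := range entries {
			fmt.Println("CNI config present:", e.Name())
		}
	}
}

Rotating the webhook's serving certificate (or correcting the clock skew) should unblock the status patches; the Ready condition and the pod-sync errors that follow clear separately, once the network provider writes its CNI configuration.
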
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.210511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.210607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212840 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.214209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.214443 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.216065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.216135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.433767 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.433988 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.210019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.210109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.433126 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.433531 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.208657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.208964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209724 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210310 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210724 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.214381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.215220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.215414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.228412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.246437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.265169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.283258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.316577 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.338447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.355035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.376554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.393883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.410875 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.427722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.432051 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.432186 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.447463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.464002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.478635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.481638 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.500612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.525681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.545535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.561605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.577200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc 
kubenswrapper[4183]: I0813 19:55:25.596899 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.617866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.633453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.650299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.668548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.686041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.701069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.717958 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.734020 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.750552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.767982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.785708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.808717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.829407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.847739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.868006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.889086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.909030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.925576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.945757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.964566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.984769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.001131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.021317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.039002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.055656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.073092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.090569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.110707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.128750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.146329 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.165602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.180669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.196983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.208506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.208681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.208974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.209052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.209169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.209380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.209857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.210362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.211061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.211409 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.217974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.234638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.248240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.266141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.279599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.301294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.320499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.340955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.358432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.376079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.395497 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.412336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.428665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.433750 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.433963 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.448253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.209938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208969 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.211093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.212218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209089 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.212871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209190 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209355 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.218028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.433219 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.433345 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209169 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.209379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.210330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.432879 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.432997 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.208609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.210451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.210724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.212400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.214013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211350 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.214410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211521 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211659 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211729 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.220154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.433518 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.434314 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.172465 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.172591 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f"} Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.189584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.209158 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.209354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.209424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.209540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.209606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.212703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.229473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.252125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.273569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.300068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.316046 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.333293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.352597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.372152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.390719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.407591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.425401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.432760 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.432946 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.432999 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.434179 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02"} pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.434265 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" containerID="cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02" gracePeriod=3600 Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.443916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.462295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.482482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.484139 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.495955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.509494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.527689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.542402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.558071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.574222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.596126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.617001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.637687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.658970 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.677549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.698057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.714660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.733913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.748767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.764003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.782511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.807091 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-
13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.828143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.846845 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.863386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.880347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.897313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.913133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.937187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.954651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.970318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.991030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.007065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.022281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.038561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.052525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.078614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.098370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.114293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.133737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.153227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.170938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.188355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.205023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208708 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209657 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211204 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210836 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210856 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211059 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211159 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.225973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context 
canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.243672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.262906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.278759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.300198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.323760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.344152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.363258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.390700 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.413277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.433133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.208636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.208934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.208997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.210200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.210395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011591 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011920 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011944 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011966 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.012000 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.031264 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037199 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037444 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037566 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037688 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037889 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.060851 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.065963 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066043 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066066 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066116 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.087550 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093403 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093500 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093520 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093540 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.107668 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113186 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113262 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113285 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113313 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.128925 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.128988 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210362 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.211350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.211559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.212372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.212732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.212939 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.213221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.215581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.208864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.210119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.210345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.210610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.208737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.213095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.213240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209868 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.210237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.229598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.247083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.271492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.287404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.304102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.324512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.346598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.363728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.386112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.404159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.419047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.438455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.457017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.473934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.485910 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.503259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.536591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.577260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.606515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.625880 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.645143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.665059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.684499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.702131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.718974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.738712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.755947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.775367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.788612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.805972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.821347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.846705 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.862544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.878903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.903665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.918377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.959140 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.977195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.977195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.996896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z"
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.030155 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.048141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.064376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.092633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.111030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.135614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.151971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.166049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.179345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.192850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.208522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.208753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209597 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.210046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.210113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.210661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.226267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.240867 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.260632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.288667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.302312 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.316731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.330416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc 
kubenswrapper[4183]: I0813 19:55:36.348769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.367972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.382453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.398560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.412950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.427473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.441537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.459557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.480479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209333 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209846 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209954 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210412 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211575 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.212098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.212364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.212694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215772 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.208993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.210029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.210421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.212039 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.208900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208900 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209447 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210028 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.212046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.212470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.213295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.208415 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.208693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209636 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.487965 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.209372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.209948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.213038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213161 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.213933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.214161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.215624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.215751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.215911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.218971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.219169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209135 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.209483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.212882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.214498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.214927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213756 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.212912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215847 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.216288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.216928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.217083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.217571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.219057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525399 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525463 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525502 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525527 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.545583 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:43Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549591 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549672 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549752 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.550257 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.567932 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568007 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568079 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568152 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568178 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586657 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586755 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586897 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586931 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586967 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.607984 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608023 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608034 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608053 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608073 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.627269 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
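All of the status-patch retries above fail at the same point: the kubelet's PATCH is rejected because the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2024-12-26T00:46:02Z, long before the node clock's reading of 2025-08-13. A minimal Go sketch that confirms the certificate window directly, assuming it is run on the node while the webhook is listening (an illustration only, not part of the kubelet):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the failing Post in the log records above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the certificate instead of trusting it
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.UTC())
	fmt.Println("notAfter: ", cert.NotAfter.UTC())
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the x509 error above")
	}
}

Until that certificate is rotated, every node-status patch is rejected with the same x509 error, which is why the retry budget is eventually exhausted ("update node status exceeds retry count").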
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.210209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.209525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208638 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209048 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209201 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.209945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.210704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.211203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.211323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.213333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.214584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.214963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.216197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.216344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.217096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.217899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.219321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.219501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.219936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.220203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.234598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.252326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.274895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.293900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.311923 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.325454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.344118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.364002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.383908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.404294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.423966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.439324 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.454437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.475139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.489594 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.491009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.508589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.528109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.546926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.563106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.579428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.597344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.613067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.627648 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.645576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.663074 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.679884 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.694595 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.713364 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.730897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.744865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.761743 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.779617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.799458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.820684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.839895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.856965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.881500 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.902081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.926453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.949887 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.971894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.989187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.013545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.036552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.055414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.081184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.101311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.119677 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.138006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.158192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.180428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.207305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208544 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.208859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.209100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.209399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.227698 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.249455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.274009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.294673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.324302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a
40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.340935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.359362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.378906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.402173 4183 status_manager.go:877] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed 
certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.423390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.442699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.465405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.495478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.515068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.531978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.209038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.209275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.209613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.209742 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.209911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211141 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.211247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.211467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.211693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212294 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212882 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213216 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.215299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.215359 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.216597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.216771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.217018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208883 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.209028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.210183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.209644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.209942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.212077 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211534 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.209422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.209882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210081 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.209651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.210711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.491999 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209047 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210907 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.213022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.213153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.213599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.208619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.208704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.208949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209142 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208352 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.208574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.208753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209120 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210238 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211475 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.212264 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.212583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.213096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.213741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.214331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.215030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.215385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.215640 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.215920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.216227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.216397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.216446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764365 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764412 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764428 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764459 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764483 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.783246 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.791630 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792179 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792451 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792580 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792731 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.811048 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [node status patch payload elided; byte-for-byte identical to the attempt logged above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.817735 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.817922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.818121 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.818278 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.818402 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.849442 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [node status patch payload elided; byte-for-byte identical to the attempt logged above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.857971 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858316 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858459 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858629 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858764 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.887074 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [node status patch payload elided; byte-for-byte identical to the attempt logged above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894340 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894603 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894719 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894963 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.895304 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.913122 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [node status patch payload elided; byte-for-byte identical to the attempt logged above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.913189 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.208860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.209139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.209208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.210073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.210484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.210768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675270 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675642 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675762 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675990 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.676105 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.208515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209108 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.208609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.208569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209619 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210218 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210646 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211871 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.212048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.212731 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.213583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.227942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.244287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.264242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.285094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.301999 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.318688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.335131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.360851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.377755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.397198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.422138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.441441 4183 status_manager.go:877] "Failed to update status 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 
19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.458117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.479031 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.493683 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.501282 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.522960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.548171 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.562761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.577018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.593049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.614034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.634136 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.651295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.668755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.685663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.703587 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.718571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.736345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.752665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.767003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.781137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.797148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.815248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.836444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.852126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.868337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.883350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.902739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.922450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.937512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.953555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.968726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.987218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.003406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.019021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.035238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.055083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.067732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.095583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.113874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.137709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.162697 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.185519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.205393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208734 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.208966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.210274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.210563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.228200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.245220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.266423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.285124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.305767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.322163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.340535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.355586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.370258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.385888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.408444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.426946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.442190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.209863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210590 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211063 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211720 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212458 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212954 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213515 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.209265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209863 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.211052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.211117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208694 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209200 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210214 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.212352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.212431 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.212448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.213395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.213892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214847 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.208527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.208869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.209259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.209591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209294 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.210318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.210575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.211142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.211335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.495437 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208966 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209575 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210489 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211190 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212164 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.212235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.212571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.212972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.210238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.209708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210843 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211533 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212635 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.213653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.199875 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.199970 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.199992 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.200018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.200053 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.209100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.209204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.209734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.210086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.210765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.211455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.211488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.211601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.211717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.212487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.212649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.215692 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220665 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220712 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220727 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220843 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239285 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239362 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239383 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239446 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.258665 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.258733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.258752 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.259075 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.259163 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.276034 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282154 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282220 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282242 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282262 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.297033 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.297166 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.209933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208599 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208774 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208878 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208982 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.217234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.217976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.229952 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.247903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.264720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.282737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.300145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.320640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.338568 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.356176 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.373593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.387611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.403395 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.419729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.442408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.461205 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.480482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.495992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.497511 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.512671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.528541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.551161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.565525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.582667 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.597932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.616593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.632259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.649354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.664988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.691337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.711227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.730971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
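
[annotation] The ovnkube-node-44qcg record above shows ovnkube-controller in CrashLoopBackOff with restartCount 5 and a back-off of "2m40s". That interval is consistent with kubelet's default container restart back-off — assuming the usual defaults of an initial 10s delay, doubling per consecutive failure, capped at 5m — a minimal sketch of that arithmetic:

package main

import (
	"fmt"
	"time"
)

// backoff reproduces kubelet's default container restart back-off:
// initial 10s, doubling per consecutive failed restart, capped at 5m.
// These defaults are an assumption based on upstream kubelet behavior;
// they match the "back-off 2m40s ... restartCount:5" seen in this log.
func backoff(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < restarts; i++ {
		d *= 2
		if d > 5*time.Minute {
			d = 5 * time.Minute
			break
		}
	}
	return d
}

func main() {
	for r := 1; r <= 6; r++ {
		fmt.Printf("restart %d -> back-off %s\n", r, backoff(r))
	}
}

10s doubled four times is 160s, i.e. the "2m40s" in the CrashLoopBackOff message; one more failure would hit the 5m cap.
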
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.750932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.770151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
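
[annotation] Every patch attempt in this stretch fails with the same root cause: the serving certificate behind the pod.network-node-identity.openshift.io webhook expired on 2024-12-26T00:46:02Z, months before the current time of 2025-08-13. A minimal sketch of the validity-window check that produces "x509: certificate has expired or is not yet valid", assuming the certificate has been copied out to a hypothetical local path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute wherever the webhook's
	// serving certificate has been extracted to.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	// The same window check the TLS stack performs during verification.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
	} else {
		fmt.Println("certificate valid until", cert.NotAfter.UTC())
	}
}
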
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.791679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.815346 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.836715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
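
[annotation] The failing call is always Post "https://127.0.0.1:9743/pod?timeout=10s". To confirm from the node which certificate that endpoint actually presents, one can complete a handshake with verification disabled and read the peer certificate's dates; a diagnostic sketch (skipping verification is tolerable here only because nothing is sent over the connection and it is immediately closed):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Diagnostic only: skip verification so the expired certificate
	// does not abort the handshake before we can inspect it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
		cert.Subject, cert.NotBefore.UTC(), cert.NotAfter.UTC())
	// A notAfter in the past here would explain the kubelet's repeated
	// "certificate has expired or is not yet valid" failures above.
}
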
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.854622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.875412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
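
[annotation] Each of these records embeds the strategic-merge patch the kubelet status manager tried to apply: the $setElementOrder/conditions directive fixes the ordering of the conditions list, while conditions itself carries only the entries that changed. A sketch that pulls the changed condition types out of such a patch — the literal below is abbreviated from the patches above, keeping just enough fields to show the structure:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Abbreviated from a status patch in the log above.
const patch = `{
  "status": {
    "$setElementOrder/conditions": [
      {"type": "PodReadyToStartContainers"},
      {"type": "Initialized"},
      {"type": "Ready"},
      {"type": "ContainersReady"},
      {"type": "PodScheduled"}
    ],
    "conditions": [
      {"type": "Ready", "status": "False", "reason": "ContainersNotReady"}
    ]
  }
}`

func main() {
	var doc struct {
		Status map[string]json.RawMessage `json:"status"`
	}
	if err := json.Unmarshal([]byte(patch), &doc); err != nil {
		log.Fatal(err)
	}
	// Only the "conditions" list holds changed entries; the
	// $setElementOrder key just pins their relative ordering.
	var changed []struct {
		Type   string `json:"type"`
		Status string `json:"status"`
		Reason string `json:"reason"`
	}
	if err := json.Unmarshal(doc.Status["conditions"], &changed); err != nil {
		log.Fatal(err)
	}
	for _, c := range changed {
		fmt.Printf("condition %q -> status=%s reason=%s\n",
			c.Type, c.Status, c.Reason)
	}
}
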
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.902980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.920983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.943894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.963982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.986722 4183 status_manager.go:877] "Failed to update status 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 
19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.009168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.025738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.049182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.076325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.098116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.115644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.136405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.158495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.178972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.200008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.208858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.209116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.209544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.209932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.210166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.210300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.210400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.210533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.210633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.220044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.240662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.259263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.278110 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.295642 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.315056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.332747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.353472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.369451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.385669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.403059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.419689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.438641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.460732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.480764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.500187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209640 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210815 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.211029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.211374 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.211499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212605 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.213512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.208746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.209013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.209902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.209956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.211077 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.212000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209495 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.209552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.209715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210140 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210627 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211289 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212088 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.215061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.215758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.216261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.143478 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.143573 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.208542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.208705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.208895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.499170 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.213054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.208911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.209705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.211068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.214055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.208653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.209522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.209757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.210114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.210386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512226 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512237 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512278 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512299 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.526050 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531306 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531393 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531414 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531435 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531464 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.545560 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.550937 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551008 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551025 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551047 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551074 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.564534 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has
expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.568959 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569035 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569052 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569093 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.588623 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the first attempt above; omitted] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.594962 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595040 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595057 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595078 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595100 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.608550 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the first attempt above; omitted] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.608622 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.209720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
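
The three consecutive "Error updating node status, will retry" entries above, capped by "update node status exceeds retry count", show the kubelet giving up after its fixed number of node-status retries. Every attempt fails for the same reason: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a TLS certificate that expired on 2024-12-26, while the node clock reads 2025-08-13, a typical symptom of resuming a CRC VM long after its certificates rotated out of validity. What follows is a minimal Go sketch of the same check the kubelet's TLS client performs, offered as an illustrative diagnostic rather than kubelet code; it assumes the webhook is still listening on 127.0.0.1:9743 (an openssl equivalent would be: echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -dates).

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Skip chain verification deliberately: the point is to inspect the
        // certificate that verification rejects for the kubelet.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        // A completed handshake guarantees the server sent at least one certificate.
        leaf := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("notBefore=%s notAfter=%s\n", leaf.NotBefore, leaf.NotAfter)
        if time.Now().After(leaf.NotAfter) {
            // The same condition crypto/x509 reports as "certificate has expired".
            fmt.Println("certificate has expired")
        }
    }
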
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
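
The "Error syncing pod, skipping" entries in this stretch do not indicate per-pod problems: the kubelet refuses to create any pod sandbox while NetworkReady=false, and NetworkReady stays false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. On a healthy node that directory holds the network plugin's config (for OVN-Kubernetes, which the network-node-identity webhook above belongs to, typically a 10-ovn-kubernetes.conf file); here, running "ls -l /etc/kubernetes/cni/net.d/" on the node would presumably show an empty directory, since the network provider's pods likely cannot start while the cluster's certificates are expired. Everything that follows is that single condition fanning out across every pending pod on the node.
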
Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210843 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211583 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.212281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.212358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.212593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.227187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.243581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.259957 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.277879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.294626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.311998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.334503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.350299 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.351259 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.351508 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" exitCode=1
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.351608 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f"}
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.352652 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.352885 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f"
Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.353509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.357679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.378410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.399024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.414248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.434201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.449765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.464647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.489757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.500460 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.506362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.524239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.541269 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.557247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.572415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.588387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.604962 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.620897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.638676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.655024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.670020 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.685454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.699234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.721493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.737484 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.760495 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.780416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.801081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.816886 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.836202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.858273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.875002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.894663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.926236 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.945697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.969096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.003642 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.020976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.047977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.068544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.093756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.108855 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.124349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.149236 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.165761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.184978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.202232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209098 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.212072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.222499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.239376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.269516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerI
D\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.286118 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.304628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.320653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc 
kubenswrapper[4183]: I0813 19:56:16.339570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.356069 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.364234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.382370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.399208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.415684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved 
files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.432113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.445471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.462009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.483643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.503598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.522071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.539124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.558173 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.576366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.593889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.611320 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.631165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.665153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.698930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.715747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.734008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.751758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.771091 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.788994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.806566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.836602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.854684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.878263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.904117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.919747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.940516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.959600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.978959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.993682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.008120 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.033367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.047860 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.063536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.105051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.143381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.180872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.208575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208717 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.208942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210160 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.211059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.211113 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.211175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.211227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.212508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.213090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.213642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.213764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.213879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.214385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.215172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.215868 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.216106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.216240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.216711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.217499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.217851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.219003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.219961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.228751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.261428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.302559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that 
your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.343270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364224 4183 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02" exitCode=0 Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364294 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02"} Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364331 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac"} Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364361 4183 scope.go:117] "RemoveContainer" containerID="0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.383378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 
reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.421610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.429562 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.433483 4183 patch_prober.go:28] interesting 
pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.433580 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.460251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.504343 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.549174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.580721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.625069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.662510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.702944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.744126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.783212 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.819603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.860931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.903214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.940803 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.982282 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.026320 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.063229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.101991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.143582 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.183959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.208994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209173 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.209353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.209714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.226485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.269575 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.320088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.352706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.389781 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.423420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.432088 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.432205 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.463661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.499939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.543935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.582775 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.623942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.663709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.702396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.742640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.780916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.825602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.872737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.903669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.944095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.981653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.026010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.062359 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.101305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.144546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.181393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.209576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.209937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215798 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.216113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.216204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.216258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.219556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.264480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.302417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.342617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.384648 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.424299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.432424 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.432912 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.462156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.505269 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.545297 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.673785 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.706049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.727940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.752038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.769963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.832900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.849550 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.876249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-
13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.906455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.943305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.983367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.023068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.063089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.100118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.141356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.181972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210221 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.210314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.210725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.211163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.211619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.211989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.213946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.214105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.213966 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.215044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.227465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.265598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.339301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.354486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.386384 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.430137 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.432887 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.432975 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.464674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.496082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.502134 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.519051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.546187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.593213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 
19:56:20.624098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.660919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.699445 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.744033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.784885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch 
stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.821728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.862049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.900037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.941619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.985322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.023563 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.064307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.106073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.144427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.183199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209256 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209587 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210490 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211780 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.214056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.214119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.214198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.224279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.262956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.432690 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.432951 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209647 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.211166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.211317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.432667 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.432776 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.208984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.209362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.209420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.209751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210690 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.211074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.211206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.211271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211630 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212698 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213784 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214370 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.432949 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.433115 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.208442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.208743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.209532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.433884 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.434077 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700048 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700153 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700178 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700251 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.716426 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:24Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723122 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723329 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723370 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745076 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745146 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745162 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745185 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745232 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.765005 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.765054 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.765075 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.765100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.765126 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.779063 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:24Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.784634 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.784675 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.784688 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.784710 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.784737 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.798684 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.208952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.210143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.210662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.211001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211107 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.211239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.211467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.212493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.212601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.212662 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.212669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.212786 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.230450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.247405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.268587 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.286304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.303362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.317926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.335582 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.350646 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.373454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.390222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.408072 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.425205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.431509 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.431603 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.441396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.455680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.473899 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.491360 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.503760 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
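
The repeated "Failed to update status for pod" records above share a single root cause, stated in each error string: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a certificate that expired on 2024-12-26T00:46:02Z, while the node clock reads 2025-08-13T19:56:25Z, so every kubelet status patch fails TLS verification before it ever reaches the API server. Below is a minimal Go sketch of a node-side check that reproduces the kubelet's x509 verdict (a hypothetical diagnostic, not taken from this log; the address comes from the error strings, and InsecureSkipVerify is set so the expired chain can be inspected instead of aborting the handshake):

// checkcert.go — hypothetical diagnostic; dials the webhook endpoint named in
// the log entries and prints each presented certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Address taken from the "Post \"https://127.0.0.1:9743/pod?timeout=10s\"" errors above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // skip verification so an expired chain can still be inspected
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}

Run on the node, this should print expired=true for the leaf certificate, matching the "certificate has expired or is not yet valid" verdict the kubelet logs on each failed patch; the trailing "Container runtime network not ready ... No CNI configuration file" error in the entry above is downstream of the same outage, since ovn-kubernetes cannot come up while its webhook certificates are invalid.
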
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.509424 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.533014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.548674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.564661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.580489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.601151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.623561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.641068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.656056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.673617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.690458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.701915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.714892 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.732566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.751595 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.768521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.785310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.802176 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.818491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.838598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.854870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.882463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.902023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.920979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.938464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.955760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.973037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.998760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.018333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.035385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.050514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.065416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.100773 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.155977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.193292 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209041 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.211758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.231006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.248508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.277241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerI
D\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.296764 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.311566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.326400 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc 
kubenswrapper[4183]: I0813 19:56:26.354032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.379063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.403086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.429143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.432332 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.432446 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.450079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.465152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.483462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.501607 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.519010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.209579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.209763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209248 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210782 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210974 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211415 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212092 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.214315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.214386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.215184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.215409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.215964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.216180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.216712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.217522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.224625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.225390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.225536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.434736 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.435371 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.208964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209632 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.210054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.210573 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.233990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:4
4:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.260299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.357336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.418047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.433269 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.433428 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.445871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.466549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.521455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.542326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.561741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.593118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.612964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.634608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.654884 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.674200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.695472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.712581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.728469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.746766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.769183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.791487 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.809622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.832051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.852544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.892347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.916869 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.935146 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.956539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.980600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.025381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.044215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.063505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.081120 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.114123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.135507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.151580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.169593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.182709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.200942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209540 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210278 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211503 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.213036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.213481 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.213661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.223285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.239358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.254769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.272688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.289248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.317186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerI
D\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.336041 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.358950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.388992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc 
kubenswrapper[4183]: I0813 19:56:29.439765 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.439940 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.565093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.598019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.617523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.646586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.693340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.718939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.737067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.755028 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.774198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.794979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.811977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.831254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.854229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.877495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.900155 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.924133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.952726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.971984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.996294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.021090 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.209411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.209755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.209983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.211274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.211650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.433915 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.434596 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.509667 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.209217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.209490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209646 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.209868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210313 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211957 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213776 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.214991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.215155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.433423 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.433535 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209135 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.211035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.211216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.211417 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.212166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.433986 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.434187 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208233 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.208707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208568 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.208883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210002 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.211097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.211979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.212003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.212124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.212272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.212426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.434643 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.434749 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.208749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.209075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.209097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.432745 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.432945 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173154 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173230 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173258 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173282 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173312 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.190060 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195649 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195769 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195884 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.208519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.208765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209040 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209547 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210074 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210527 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.211368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.211716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212688 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.213037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.214096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.214223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.214317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.218438 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.225371 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227084 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227234 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227629 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.231321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e65
67ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.255693 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.255997 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263157 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263670 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263763 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.281267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.285061 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291047 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291150 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291174 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291199 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291235 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.303167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.305852 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.306098 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.319336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.335442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.350897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.368024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.390210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.407238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.427848 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.432689 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.432882 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.444910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.463133 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.479682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.498515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.511479 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.515038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.533761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.551324 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.569990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.588562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.616902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.631852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.647608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.663514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.679297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.704639 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.723493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.739076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.756040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.771347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.787684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.803370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.824609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.842581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.860221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.884089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.914013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o
://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.931507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.946445 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.965138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc 
kubenswrapper[4183]: I0813 19:56:35.986679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W08
13 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.004674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.021088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.042220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.127137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.148172 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.169532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.190295 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.209126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208990 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.209216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209845 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.210115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.213053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has 
prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.232737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.250069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.266667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.290307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.312315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.335611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.355864 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.374987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.397634 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.415379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.432980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.433615 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.433816 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.451944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.468602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.489233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.506230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.524348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.541674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.558081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.208578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.208713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.208726 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.209397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.209856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210108 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.210733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.211083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211328 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211458 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.432029 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:37 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:37 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.432528 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.209086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.209343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.210472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.432474 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:38 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:38 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.432591 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.210132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209348 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209763 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.211492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.212552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.213703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.213982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.214120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.214320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.214755 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.215412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.216182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.217018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.431916 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.432043 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.143853 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.143985 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.208260 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.208520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.209201 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.209738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210272 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.211098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.431658 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:40 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:40 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.431818 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.513339 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.208971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.209536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.209818 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209862 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.209950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.211447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.211955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.212560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.213403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.432079 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.432175 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208685 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.211230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.211280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.211576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.212181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.212232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.433717 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.433924 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209177 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.210766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.210981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211273 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.213041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.213520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.214048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.214414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.216009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.433489 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.433667 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.209893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.210735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.211042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.211324 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.211923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.437341 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.437424 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.209479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.209758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210870 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211737 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211848 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212273 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212918 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.214247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.214479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214816 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.236102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.256173 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.277871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.299117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.318504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.337392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.357907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.377195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.396886 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.412724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.430289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.435076 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.435208 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.446585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.464267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.484097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509656 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509724 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509744 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509768 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509888 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.514750 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.525106 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c0
7706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531511 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531565 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531580 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531602 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531631 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.532275 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.547266 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089
fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0
f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd
1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.549638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553289 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553351 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553390 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553415 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.565862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.567330 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572534 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572578 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572592 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572610 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572631 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.582486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594104 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594230 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594260 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.602856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.609669 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.620545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.637701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.654989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.679207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.707893 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd
40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.722575 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.737966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.753518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.780421 4183 status_manager.go:877] "Failed to update status 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 
19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.802770 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.819528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.838022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.855218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.876594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.892557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.909627 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.928076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.946730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.965170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.984230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.002926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.020187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.034956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.049617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.067022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.084043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.099638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.113445 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.128618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.143514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.164310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.182140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.201018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.209125 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.209268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.209292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.209430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.209920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.210480 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.210704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.220238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.238069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.254369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.269416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.285891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.303135 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.317994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.333931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.352186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.374635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.391973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.404684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.422394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.434053 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:46 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:46 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.434252 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.438904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.213534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.213665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.432962 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:47 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:47 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.433095 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.208561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.432899 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:48 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:48 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.433067 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.209365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.209974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.210198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.210736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.210872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.214022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.432293 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.432456 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.208852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.209154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.209366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.209706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.210140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.210219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.210324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.210432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.432382 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.432541 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.516577 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209247 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209714 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209870 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210269 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210900 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211478 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212041 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.213086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.214006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.214143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.214262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.432427 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.432549 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.006655 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.006914 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007111 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007161 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007227 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007253 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007285 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007333 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007415 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007349852 +0000 UTC m=+900.700014910 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007452 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007494 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007472935 +0000 UTC m=+900.700137893 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007555 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007558 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007573378 +0000 UTC m=+900.700238106 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007603 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007616919 +0000 UTC m=+900.700281577 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007564 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.00764336 +0000 UTC m=+900.700308568 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007679 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007694 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007697712 +0000 UTC m=+900.700363000 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007744 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007755 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007741973 +0000 UTC m=+900.700406601 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007848 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007768434 +0000 UTC m=+900.700433082 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007766 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008004 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.00798148 +0000 UTC m=+900.700646438 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008009 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008135 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008195 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008199 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008180835 +0000 UTC m=+900.700845883 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008264 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008235867 +0000 UTC m=+900.700900535 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008297 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008343 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008349 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008400 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008451 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008478 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008513 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008520 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.008510885 +0000 UTC m=+900.701175623 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008587467 +0000 UTC m=+900.701252145 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008618998 +0000 UTC m=+900.701283796 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008640089 +0000 UTC m=+900.701304707 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008978 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009009 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009084 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009069591 +0000 UTC m=+900.701734299 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009116 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009275 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009394 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009448982 +0000 UTC m=+900.702114050 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009500 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009550 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009539094 +0000 UTC m=+900.702203912 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009604 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009649 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009690 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009681438 +0000 UTC m=+900.702346176 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010074 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010146 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010186 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010163532 +0000 UTC m=+900.702828450 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010283 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010190 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010370 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010354507 +0000 UTC m=+900.703019306 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010411 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010436 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010466141 +0000 UTC m=+900.703130849 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010543 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010618 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010680 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010714708 +0000 UTC m=+900.703379396 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010750 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010739388 +0000 UTC m=+900.703404176 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010859 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010760249 +0000 UTC m=+900.703424927 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.112476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.112704 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.113506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.113475002 +0000 UTC m=+900.806139800 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.113656 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.113752 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114054 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.113885 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114172 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.114149981 +0000 UTC m=+900.806814759 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.114200723 +0000 UTC m=+900.806865711 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114636 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114752 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.114736638 +0000 UTC m=+900.807401446 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115068 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115412 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.11550687 +0000 UTC m=+900.808171738 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115852 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115913 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.115903241 +0000 UTC m=+900.808567859 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115931 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.116064 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.116148 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.116127588 +0000 UTC m=+900.808792386 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.117854 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.117958 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.118043 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.118027002 +0000 UTC m=+900.810691730 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.118196 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.118255 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.118321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.11830789 +0000 UTC m=+900.810972508 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.208951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209093 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209940 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.210173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.219661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.219908 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.219932 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.219947 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220003 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.219985583 +0000 UTC m=+900.912650341 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220048 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220073 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220133 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220115147 +0000 UTC m=+900.912779965 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.219931 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220360 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220477 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220530 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.220517248 +0000 UTC m=+900.913182066 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220584 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220661 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220689 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220732 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220716864 +0000 UTC m=+900.913381672 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220869 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220884 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220868368 +0000 UTC m=+900.913533096 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220929 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220968 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220958481 +0000 UTC m=+900.913623099 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220989 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221063 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221100 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221070114 +0000 UTC m=+900.913735362 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221111 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221135 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221157 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221148166 +0000 UTC m=+900.913812784 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221178 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221166597 +0000 UTC m=+900.913831405 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221181 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221274 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221314 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221333 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221344 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221359 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221377 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221366163 +0000 UTC m=+900.914030831 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221432 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221445 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221482 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221471486 +0000 UTC m=+900.914136104 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221519 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221563 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221554898 +0000 UTC m=+900.914219516 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221565 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221523 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221649361 +0000 UTC m=+900.914314619 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221708 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221734153 +0000 UTC m=+900.914398781 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221857 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221893 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221881467 +0000 UTC m=+900.914546075 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222109 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222211 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222279 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222294 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.22231453 +0000 UTC m=+900.914979268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222386 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222425 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222416213 +0000 UTC m=+900.915080961 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222457 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222485 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222513 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222549 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222590 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222612898 +0000 UTC m=+900.915278156 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222678 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222708 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222699821 +0000 UTC m=+900.915364439 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222711 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222720 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222744 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222645 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222680 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222876 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222923 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222936 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222748762 +0000 UTC m=+900.915413510 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223003 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223005269 +0000 UTC m=+900.915670557 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223056 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223095 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223085082 +0000 UTC m=+900.915749840 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223119 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223108992 +0000 UTC m=+900.915773820 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223156104 +0000 UTC m=+900.915820872 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223203 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223197175 +0000 UTC m=+900.915861773 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223289 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223635 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223702 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 
19:56:52.223729 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223730 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223822 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223765551 +0000 UTC m=+900.916430189 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223891 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223922 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223928 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223919406 +0000 UTC m=+900.916584224 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223926 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223952 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223956 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.223943306 +0000 UTC m=+900.916608074 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223994 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224006 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224012 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224018 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224032 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224036 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223764 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224063 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224054189 +0000 UTC m=+900.916718937 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224082 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22407219 +0000 UTC m=+900.916736778 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224096 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224136 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224126661 +0000 UTC m=+900.916791470 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224050 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224146232 +0000 UTC m=+900.916811110 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223962 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224305 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224351 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224334437 +0000 UTC m=+900.916999055 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223673 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224421 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224454 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224492 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224465081 +0000 UTC m=+900.917130289 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224510 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224354 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224567 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224555324 +0000 UTC m=+900.917220292 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224588 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224594 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224602 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224646 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224619 4183 projected.go:200] Error preparing data for 
projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224665 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224648936 +0000 UTC m=+900.917313854 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224978 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225023 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225121 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225153 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225171 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc 
kubenswrapper[4183]: E0813 19:56:52.225182 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225064 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225088 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225078049 +0000 UTC m=+900.917742637 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225257594 +0000 UTC m=+900.917922212 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225353616 +0000 UTC m=+900.918018505 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225376 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225370147 +0000 UTC m=+900.918034735 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.2254799 +0000 UTC m=+900.918144508 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225638 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225728 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225857 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225761968 +0000 UTC m=+900.918426596 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225951 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225988 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226016 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226051 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226095 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226084517 +0000 UTC m=+900.918749135 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226147 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226161 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226171 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226202 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22619228 +0000 UTC m=+900.918857029 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226271 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226321 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226328 4183 
configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226383836 +0000 UTC m=+900.919048754 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226407 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226427 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226437 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226445 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226454 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226317 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226484 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226475289 +0000 UTC m=+900.919140117 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226502 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226520 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226525 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226539 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22652795 +0000 UTC m=+900.919192698 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226565 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226559291 +0000 UTC m=+900.919224159 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226403 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226579 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226622883 +0000 UTC m=+900.919287951 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226654 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226708335 +0000 UTC m=+900.919373353 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226963 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226949922 +0000 UTC m=+900.919614530 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227122 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227311 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227362 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227387 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227412 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227439 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227481 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: 
\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227554 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227582 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227638 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227671 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227748 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227855 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.227771016 +0000 UTC m=+900.920435644 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227905 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227931 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22792461 +0000 UTC m=+900.920589218 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227981 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228004 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.227997002 +0000 UTC m=+900.920661710 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228036 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228059 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228053564 +0000 UTC m=+900.920718172 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228090 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228112 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.228105755 +0000 UTC m=+900.920770363 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228141 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228165 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228157837 +0000 UTC m=+900.920822455 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228194 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228215 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228209228 +0000 UTC m=+900.920873836 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228261 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228273 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228281 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228305 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.228298471 +0000 UTC m=+900.920963289 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228334 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228355 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228349052 +0000 UTC m=+900.921013660 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228391 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228412 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228406364 +0000 UTC m=+900.921070982 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228453 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228465 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228472 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228498 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228491986 +0000 UTC m=+900.921156594 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228539 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228549 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228556 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228581 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228574319 +0000 UTC m=+900.921239027 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228626 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228637 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228644 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228671 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228659891 +0000 UTC m=+900.921324499 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228711 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228735 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228728573 +0000 UTC m=+900.921393191 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329202 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329264 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329278 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329357 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.329338005 +0000 UTC m=+901.022002744 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.329541 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329903 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329981 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.329966463 +0000 UTC m=+901.022631191 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.329692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330253 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330334 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330547 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330543 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330599 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330668 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: 
E0813 19:56:52.330686 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.330664603 +0000 UTC m=+901.023329401 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330705 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.330697124 +0000 UTC m=+901.023361752 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330710 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330751 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330751 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331014 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330763 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331114 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331128 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object 
"openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331137 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330876 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330894 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331167 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331156437 +0000 UTC m=+901.023821265 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331303 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331325322 +0000 UTC m=+901.023990270 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331407 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331484 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331506 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331517 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331549 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331522768 +0000 UTC m=+901.024187686 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331592 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331574929 +0000 UTC m=+901.024239927 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331618 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331624 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33160964 +0000 UTC m=+901.024274538 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331647 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331665 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331690 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331728 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331710023 +0000 UTC m=+901.024374961 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331766 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331853 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331866 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331902 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331892878 +0000 UTC m=+901.024557616 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331943 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332039 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332094 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: 
E0813 19:56:52.332094 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332066413 +0000 UTC m=+901.024731411 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332108 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332148 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332146 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332299 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332337 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332326781 +0000 UTC m=+901.024991529 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332336 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332422 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332407643 +0000 UTC m=+901.025072371 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332426 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332462 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332476 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332467955 +0000 UTC m=+901.025132543 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332480 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332538 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332520526 +0000 UTC m=+901.025185474 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332915 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332967 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.333046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333088 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333131 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333138 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.333126874 +0000 UTC m=+901.025791612 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333204 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.333187085 +0000 UTC m=+901.025851913 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333382 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.333470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333483 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.333459553 +0000 UTC m=+901.026124201 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333608 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333714 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33369312 +0000 UTC m=+901.026358068 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334103 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334178 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334213 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334231 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334268 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334313 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334293467 +0000 UTC m=+901.026958395 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334331 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334378 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334367869 +0000 UTC m=+901.027032467 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334391 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334420 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334438 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334497 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334480572 +0000 UTC m=+901.027145610 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334565 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334600 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334609 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334597986 +0000 UTC m=+901.027262704 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334649 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334666598 +0000 UTC m=+901.027331216 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334720 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334731 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334741 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334850 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33475951 +0000 UTC m=+901.027424138 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334731 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334983 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335040 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335058 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.335035338 +0000 UTC m=+901.027700366 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335094 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335111 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335123 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33511498 +0000 UTC m=+901.027779708 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335196 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335279 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335282 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335347 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335330287 +0000 UTC m=+901.027995095 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335423 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335453 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335455 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335482 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335501 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33546074 +0000 UTC m=+901.028125468 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335467 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335430 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335645 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335632105 +0000 UTC m=+901.028296823 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335704677 +0000 UTC m=+901.028378065 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335768 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335758249 +0000 UTC m=+901.028422837 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335945 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336215 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336244 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336259 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336295 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.336286404 +0000 UTC m=+901.028951022 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336329 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336347 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336355 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336386 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.336377506 +0000 UTC m=+901.029042124 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.336499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.336535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.336575 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336757 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336878 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336890 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336951 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.336934972 +0000 UTC m=+901.029599590 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.337359 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.337347574 +0000 UTC m=+901.030012222 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.433241 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.433378 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.438884 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439174 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439253 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439270 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439269 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439297 4183 projected.go:200] Error 
preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.439346057 +0000 UTC m=+901.132010795 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.439384 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439395 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.439385318 +0000 UTC m=+901.132050036 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.439906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440236 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440307 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440325 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.440395007 +0000 UTC m=+901.133059735 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.212960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213435 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213964 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.216028 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.216043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.216123 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.217017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218841 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.219157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.219577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.219909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.433185 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.433295 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.209532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.209665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.209987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.210082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.433286 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.433513 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.677470 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.677664 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.677901 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.677967 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.678012 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.221052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.221172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.221507 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220578 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.223310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.223501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.223593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.223762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.224019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227109 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227713 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.228194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.228357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.228497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.229344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.230699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230851 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.230946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230987 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231754 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.232654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.232762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.234073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.256464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.308340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.338906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.367109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.394532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.417519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.432527 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.432662 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.439166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.462444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.483103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.503670 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.518104 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.522390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.542206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.572258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.589877 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.607284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.625441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.648886 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 
19:56:55.670923 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.689145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.708963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.727056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.741697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.757192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.776238 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.793569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.819525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.841751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850661 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850767 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850873 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.860895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.867695 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"]
,\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"s
izeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"
names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.874961 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875035 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875110 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875188 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.883106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.892060 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-ma
nager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e99
6bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899612 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899633 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899727 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899765 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.911897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.921211 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089
fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0
f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd
1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.928921 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929036 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929293 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929323 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929350 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.931659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.949509 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.955321 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.955640 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.955549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.956297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.956380 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.956414 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.976862 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…image list identical to the 19:56:55.949509 retry above elided…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has
expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.976953 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.982193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
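[annotation] The "Unable to update node status" err="update node status exceeds retry count" entry above follows several identical "Error updating node status, will retry" attempts inside the same second: the kubelet bounds node-status patches with a small fixed retry budget (nodeStatusUpdateRetry is 5 in upstream kubelet; treated as an assumption here, not read from this build). A sketch of that loop:

```python
# Sketch of the bounded retry visible above: a few "Error updating node
# status, will retry" entries, then "update node status exceeds retry count".
NODE_STATUS_UPDATE_RETRY = 5  # assumption: mirrors upstream nodeStatusUpdateRetry

def update_node_status(try_patch):
    """try_patch() returns None on success or an error string on failure."""
    for attempt in range(NODE_STATUS_UPDATE_RETRY):
        err = try_patch()
        if err is None:
            return None
        print(f"Error updating node status, will retry (attempt {attempt + 1}): {err}")
    return "update node status exceeds retry count"
```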
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.002440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.022208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.041617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.062279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.079343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.095441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.112237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
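[annotation] Each per-pod "Failed to update status for pod" entry in this run cites the same pod.network-node-identity.openshift.io webhook, so a quick tally is enough to confirm a single cause rather than many independent failures. A hedged triage sketch over this archive's kubelet.log (the regexes assume entries are escaped exactly as they appear in these lines):

```python
# Quick triage: count "Failed to update status for pod" entries per pod and
# confirm every one cites the same admission webhook.
import re
from collections import Counter

POD_RE = re.compile(r'"Failed to update status for pod" pod="([^"]+)"')
HOOK_RE = re.compile(r'failed calling webhook \\"([^\\]+)\\"')  # \"name\" inside err="..."

pods, hooks = Counter(), Counter()
with open("kubelet.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        pods.update(POD_RE.findall(line))
        hooks.update(HOOK_RE.findall(line))

print("pods:", pods.most_common(5))
print("webhooks:", hooks.most_common())
```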
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.131963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.147240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
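[annotation] The patch bodies in these entries are JSON that the logger has backslash-escaped twice, once for the quoted err= field and once for the string value inside it, which is why every quote appears here as \\\". A sketch that recovers the JSON from such a fragment, with the number of unquoting passes stated as an assumption:

```python
# Hedged helper: recover the JSON patch body from one of these entries.
# `escaped` is the text between err="failed to patch status \" and the
# closing \", copied as it appears in the log. LEVELS is an assumption
# about how many quoting layers the logger added (two matches the \\\"
# escaping seen throughout this file).
import json

LEVELS = 2  # assumption: err= field quoting + inner string quoting

def unescape_patch(escaped: str) -> dict:
    s = escaped
    for _ in range(LEVELS):
        s = s.encode("ascii").decode("unicode_escape")  # payloads are ASCII JSON
    return json.loads(s)

# Tiny fragment in this file's own escaping:
frag = r'{\\\"status\\\":{\\\"conditions\\\":[]}}'
print(unescape_patch(frag))  # -> {'status': {'conditions': []}}
```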
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.169278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.186660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.203240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.211380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.211598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.211648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.211895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.211896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.213071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.215348 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"
Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.216416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.221740 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.240602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.258093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.276651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.295134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.316546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.337316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.361370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.382947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.397817 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.418685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.432578 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.432737 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.439396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.458045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.476726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.492223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.511080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.531878 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.550300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.570007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.585535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.602639 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.619589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208632 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208858 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.208965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.209107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209308 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.209246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.209394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209949 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210430 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.433093 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.433217 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.209191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.209528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.211262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.210269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.211273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.212025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.212086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.432538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.433036 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209342 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209511 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209620 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209728 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211939 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.213310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.216074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.217427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.433400 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.433499 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.208202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.208525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.208732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.208967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.209250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.209564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.209911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.210030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.210239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.434039 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.434164 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.520077 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.209466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.209658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.209964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.213306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.213475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.214222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.214417 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.214642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.432910 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.433091 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209207 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.209747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.210257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209278 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.210957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.433625 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.433761 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208405 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208639 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208697 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208870 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208928 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.432538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.432657 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.208961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.210439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.432401 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.432498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208742 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.208996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.209139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209521 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209875 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.209877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210167 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210846 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.211067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.213179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.213297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.214097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.214333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.215212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.215432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.215909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.216060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.216225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.216304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.233012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.251386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.276211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.308609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0
dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.328322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.346304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.370438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and 
key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 
19:57:05.389476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.411373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.426415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.431598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.431712 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.444541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.462377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 
reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.480582 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.497925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.514520 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.521723 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.532124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.551449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.572145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.588510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.608414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.624842 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.645997 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.663466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.683937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.704973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.726118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.747703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.768918 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.791175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.808263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.825492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.849322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.866211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.884022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.901126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.920748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.937371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.954083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.973215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.991405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.011450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.027856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.049470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.068168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.090284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.106888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.127328 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.148762 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.167735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.187452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.205447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.208991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.265709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299008 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299623 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299722 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.314215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.338715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.339053 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345449 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345467 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345485 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345505 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.364440 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.376755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378718 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378909 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378934 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378961 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378999 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.400566 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406175 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406200 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406227 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406252 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.411665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.422309 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426488 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426589 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426612 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426641 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426678 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.432381 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.432490 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.434738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.443050 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.443135 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.450719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.468276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.486451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.504126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.523291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.542000 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.562857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.577177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.599599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.616132 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.208625 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.208864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
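
The status-update failures logged at 19:57:06 above all share one root cause: the network-node-identity webhook listening on https://127.0.0.1:9743 is serving a certificate that expired at 2024-12-26T00:46:02Z, so every node- and pod-status PATCH from the kubelet is rejected during TLS verification before it ever reaches the API server. The sketch below reproduces the validity-window check behind the "current time ... is after ..." message; it is a minimal illustration, assuming a PEM-encoded copy of the serving certificate at a hypothetical path (the real check happens inside the TLS handshake, which applies the same NotBefore/NotAfter test from crypto/x509).

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path: a PEM copy of the certificate served on 127.0.0.1:9743.
        data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now().UTC()
        switch {
        case now.After(cert.NotAfter):
            // Matches the failure above: "certificate has expired or is not yet valid".
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        default:
            fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }

Rotating the certificate (or re-running the cluster's certificate recovery) is the usual remedy; no number of kubelet retries can succeed while the webhook presents an expired chain.
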
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211634 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.212763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.212894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.213966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.213995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
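
Every "Error syncing pod, skipping" entry in this block is downstream of a single readiness gate: the runtime reports NetworkReady=false until a CNI network configuration exists under /etc/kubernetes/cni/net.d/, and while that holds the kubelet declines to create any new pod sandbox, which is why each pod above alternates between "No sandbox for pod can be found" and the NetworkPluginNotReady error. The kube-multus CrashLoopBackOff at 19:57:07 (and ovnkube-controller at 19:57:08, below) is what keeps that directory empty. A minimal sketch of the gate, assuming the test is simply "does the directory contain at least one CNI config file" (the production check lives in the container runtime's libcni integration, not in a standalone tool like this):

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // The directory named in the NetworkPluginNotReady message above.
        dir := "/etc/kubernetes/cni/net.d"
        var configs []string
        // libcni-style lookup: any *.conf, *.conflist, or *.json counts as a network config.
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(dir, pattern))
            if err != nil {
                panic(err) // only possible with a malformed pattern
            }
            configs = append(configs, matches...)
        }
        if len(configs) == 0 {
            fmt.Println("NetworkReady=false: No CNI configuration file in", dir)
            return
        }
        fmt.Println("NetworkReady=true, configs:", configs)
    }
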
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.215572 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.216091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.216154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.432008 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:07 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:07 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.432121 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208923 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.210088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.210231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.210454 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.212519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.432324 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:08 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:08 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.432413 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209469 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209986 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212100 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.213296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.213597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.214177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.214390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.214460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.214576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.214743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.432312 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.432422 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.144195 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.144311 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.144370 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.145382 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.145608 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665" gracePeriod=600 Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208483 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.209643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.209696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.212723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.433145 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.433281 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.524136 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577107 4183 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665" exitCode=0
Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577246 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665"}
Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577516 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"afce55cdf18c49434707644f949a34b08fce40dba18e4191658cbc7d2bfeb9fc"}
Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577545 4183 scope.go:117] "RemoveContainer" containerID="9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9"
Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.601156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.620676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.638035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.653861 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.672057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.689895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.708523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.727279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.741845 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.757649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afce55cdf18c49434707644f949a34b08fce40dba18e4191658cbc7d2bfeb9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:57:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:57:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.777734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.798631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.817019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.833558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.849947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.893332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.910649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.936523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.957273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.976704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.998571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.020229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.038163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.068755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.087412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.103659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.127081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.145394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.164353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.183260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.199646 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208931 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209691 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209127 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.212384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.213097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.224234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.243038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.258305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.274488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.289129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.309220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.326294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.345580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.365715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b3764
8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.383415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.407496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.424740 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.431965 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.432058 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.444002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.465876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.482488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.500487 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.518607 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.535174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.559292 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.579424 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.600041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.614910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.631988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.649929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.667192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.684139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.702689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.716183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.736082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.781503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.809968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.833429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.861241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.891271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.913689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.934654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.211319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.211353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.211439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.435671 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:12 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:12 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.435765 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.213109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.213171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.213257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.213811 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.215003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.434236 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.434379 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209225 4183 util.go:30] "No sandbox for pod can be found. 
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.209629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.209731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.433529 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.433636 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210466 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.212881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.212503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.213897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.432269 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.432388 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.526372 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.208474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.210024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.210102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.210581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.211386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.434755 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.434974 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.718121 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.718647 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.718858 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.719002 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.719105 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:16Z","lastTransitionTime":"2025-08-13T19:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.099408 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.125052 4183 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208424 4183 util.go:30] "No sandbox for pod can be found. 
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.208462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.208649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208852 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209024 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209351 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.210036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.210466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211478 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.212058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212819 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.213281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.213437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.213904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.214153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.432718 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.432908 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.209370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.209581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.209663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.210034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.211024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.321547 4183 csr.go:261] certificate signing request csr-6mdrh is approved, waiting to be issued Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.338156 4183 csr.go:257] certificate signing request csr-6mdrh is issued Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.432251 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.432335 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.211661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.211916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.211860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213030 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213290 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.214264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.214516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.214628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.340423 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-04-29 11:41:58.636711427 +0000 UTC Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.340502 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6207h44m39.296215398s for next certificate rotation Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.432000 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.432079 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.208455 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.208853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.209252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.209501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.211693 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.212273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.212492 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.221584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.221890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.222101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.222298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.341232 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-04-29 00:37:29.51445257 +0000 UTC Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.341283 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6196h40m9.173174313s for next certificate rotation Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.435956 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.436048 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.528200 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.210540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.210950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.211163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.213307 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.214619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.432040 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.432151 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.611965 4183 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"crc\": StorageError: invalid object, Code: 4, Key: /kubernetes.io/leases/kube-node-lease/crc, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 705b8cea-b0fa-4d4c-9420-d8b3e9b05fb1, UID in object meta: " Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.209562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.209729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.209904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.210017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.210262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.433563 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.433664 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.209517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.209927 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210931 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211820 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213009 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.214586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.215309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.431727 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.431938 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208819 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.209235 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.431766 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.431938 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208689 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211461 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212663 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213701 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.214036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.214154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.214692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214874 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214903 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215098 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.216329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.216405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.217454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.217497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.219016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.219118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.219277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.433163 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.433272 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.530038 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208607 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.210019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.210096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.434189 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.434368 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209550 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210312 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210822 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.433656 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.433849 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.213017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.213130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.432307 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.432407 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209210 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.209442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.209959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211164 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213365 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213819 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.214137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.433632 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.433961 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.210118 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.210362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.211015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.431976 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.432089 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.531284 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.209521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.209619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210328 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.211190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210659 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212544 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213384 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.215067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.215154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.215675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.216011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.216285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.217064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.217258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.432319 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.432470 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.208899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.208898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.208939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.431318 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.431441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.209897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210699 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210862 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212323 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.217270 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.433584 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.433982 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.688295 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/5.log" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.692328 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9"} Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.692941 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.209412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.209677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.210096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.210151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.210268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.210561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.211064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.211375 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.211912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.433020 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.433150 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208987 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.212346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.213272 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.213431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.213487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214221 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.218714 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.433259 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.433551 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.533656 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.209055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.209147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.209358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.209661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.210086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.210274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.210646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.210903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.211356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.437962 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.438098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.721268 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.721371 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"f7be0e9008401c6756f1bf4076bb89596e4b26b5733f27692dcb45eff8e4fa5e"} Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.212437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.212748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.212761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.212959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213165 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214422 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215244 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.218193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.223959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.224409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.224651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.225589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.225977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.434638 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:37 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:37 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.435052 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.210319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.210509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.433769 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:38 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:38 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.434416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.209324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.210163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.210453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.210726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.211145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.211329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.211447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.211633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.211942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212123 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.212225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.212527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.212747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.213074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.213349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.213987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214815 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215851 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.216021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.216212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.216439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.217158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.217197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.436366 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:39 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:39 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.437058 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.211265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.212269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.212598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.213066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.213156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.214076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.266574 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.432590 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:40 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:40 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.432865 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.209050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.209745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.221505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.222895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.223083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.216010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.228028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.216046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.229269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.229562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.241601 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242056 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242067 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242113 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242336 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242411 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242501 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242535 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242550 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.241610 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242696 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242706 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242721 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242902 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243032 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243183 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243256 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243304 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243353 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243418 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243625 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243650 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243759 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243762 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.244175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.244274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.246346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.247889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.248196 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.250203 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243256 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.252738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.256433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.257146 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258172 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258243 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258606 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258700 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258764 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258966 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259106 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259175 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259199 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259241 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259245 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259285 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259362 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259422 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259435 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259478 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259494 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259526 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259548 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.261272 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.261681 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.264082 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.269514 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.307591 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.308505 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.309621 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.309967 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310290 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310582 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310883 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311166 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311376 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311464 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311691 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311910 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.312199 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.312374 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.312658 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.313111 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.313469 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311752 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.313112 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314268 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314444 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314669 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.315003 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314447 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.315365 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310983 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314133 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314064 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314550 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314611 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.316354 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314289 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.317420 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Aug 13 19:57:41 crc 
kubenswrapper[4183]: I0813 19:57:41.317867 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318034 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318037 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318165 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318298 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318346 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318896 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.320540 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.320732 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.321535 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.322249 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.322443 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.322640 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.323503 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.323947 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.320545 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.335763 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.373275 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.377125 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Aug 13 19:57:41 crc 
kubenswrapper[4183]: I0813 19:57:41.377867 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.378103 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.380902 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.382316 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.380925 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.392298 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.761730 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.771421 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.772021 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.773384 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.773751 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.775921 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.778358 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782116 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782176 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782323 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782358 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782478 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782508 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782516 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782613 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782644 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782866 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782919 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.783210 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.783263 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.787909 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.798160 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208506 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.212364 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.213613 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.219195 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220254 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220488 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220649 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220732 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.221293 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.221356 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.221537 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222323 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222449 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222589 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222762 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.224049 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.224403 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.225661 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.225720 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.225962 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Aug 13 19:57:42 crc 
kubenswrapper[4183]: I0813 19:57:42.226365 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.233581 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.253567 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.275679 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.304066 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.314169 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.434430 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.434547 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:43 crc kubenswrapper[4183]: I0813 19:57:43.432432 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:43 crc kubenswrapper[4183]: I0813 19:57:43.432531 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:44 crc kubenswrapper[4183]: I0813 19:57:44.432188 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:44 crc kubenswrapper[4183]: I0813 19:57:44.432304 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:45 crc 
kubenswrapper[4183]: I0813 19:57:45.432995 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:45 crc kubenswrapper[4183]: I0813 19:57:45.433130 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:46 crc kubenswrapper[4183]: I0813 19:57:46.433813 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:46 crc kubenswrapper[4183]: I0813 19:57:46.433992 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:47 crc kubenswrapper[4183]: I0813 19:57:47.353241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeReady" Aug 13 19:57:47 crc kubenswrapper[4183]: I0813 19:57:47.433148 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:47 crc kubenswrapper[4183]: I0813 19:57:47.433633 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.197613 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.197747 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" podNamespace="openshift-marketplace" podName="community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.199300 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.259669 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.260237 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.260552 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.363416 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.363500 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.363691 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.364212 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.364231 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.424550 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.424707 4183 topology_manager.go:215] "Topology 
Admit Handler" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" podNamespace="openshift-marketplace" podName="redhat-operators-dcqzh" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.425866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.428554 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.428689 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" podNamespace="openshift-marketplace" podName="certified-operators-g4v97" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.429911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.432870 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-7cbd5666ff-bbfrf"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.433017 4183 topology_manager.go:215] "Topology Admit Handler" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" podNamespace="openshift-image-registry" podName="image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.433729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.436674 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.437013 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251905-zmjv9" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.437705 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.436687 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.441216 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.444276 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.451169 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.451289 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.493579 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.720542 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.723559 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.737102 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.756056 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.816858 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.981515 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982108 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982213 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982516 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982633 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982895 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.983994 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984246 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:48 crc 
kubenswrapper[4183]: I0813 19:57:48.984410 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984449 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984701 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984987 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.985149 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.985556 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.986030 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.986310 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.087352 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.087993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088206 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088407 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088913 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088951 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089277 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089332 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089423 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089987 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090317 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090496 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090536 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " 
pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090872 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090979 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.091057 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.091318 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.092134 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.092477 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.095720 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.097461 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.104405 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.336484 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7cbd5666ff-bbfrf"] Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.342516 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.362020 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.368744 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.378023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.382516 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.388390 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.434101 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.434603 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 
19:57:49.646975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.656723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.103073 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.163072 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"] Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.438628 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.439249 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.806934 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerStarted","Data":"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350"} Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.808905 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerStarted","Data":"a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c"} Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.808974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerStarted","Data":"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998"} Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.138891 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.159169 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 19:57:51 crc kubenswrapper[4183]: W0813 19:57:51.164371 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb917686_edfb_4158_86ad_6fce0abec64c.slice/crio-2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761 WatchSource:0}: Error finding container 2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761: Status 404 returned error can't find the container with id 2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761 Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.433543 4183 patch_prober.go:28] interesting 
pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.433646 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.828714 4183 generic.go:334] "Generic (PLEG): container finished" podID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerID="d14340d88bbcb0bdafcdb676bdd527fc02a2314081fa0355609f2faf4fe6c57a" exitCode=0 Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.828863 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"d14340d88bbcb0bdafcdb676bdd527fc02a2314081fa0355609f2faf4fe6c57a"} Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.828914 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerStarted","Data":"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45"} Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.831070 4183 generic.go:334] "Generic (PLEG): container finished" podID="bb917686-edfb-4158-86ad-6fce0abec64c" containerID="1e5547d2ec134d919f281661be1d8428aa473dba5709d51d784bbe4bf125231a" exitCode=0 Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.831131 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"1e5547d2ec134d919f281661be1d8428aa473dba5709d51d784bbe4bf125231a"} Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.831166 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerStarted","Data":"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761"} Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.834334 4183 generic.go:334] "Generic (PLEG): container finished" podID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101" exitCode=0 Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.834419 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101"} Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.837609 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040207 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your 
Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040326 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040670 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:57:52 crc kubenswrapper[4183]: I0813 19:57:52.432494 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:52 crc kubenswrapper[4183]: I0813 19:57:52.432613 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.846579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.947723 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.948212 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.948646 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.948878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.953627 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.953856 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.954051 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.954225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:57:53 crc kubenswrapper[4183]: I0813 19:57:53.095396 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" podStartSLOduration=475.095328315 podStartE2EDuration="7m55.095328315s" podCreationTimestamp="2025-08-13 19:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 19:57:52.022866401 +0000 UTC m=+838.715531419" watchObservedRunningTime="2025-08-13 19:57:53.095328315 +0000 UTC m=+839.787992933" Aug 13 19:57:53 crc kubenswrapper[4183]: I0813 19:57:53.432381 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:53 crc kubenswrapper[4183]: I0813 19:57:53.432503 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.433767 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.433956 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678312 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678447 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678541 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678575 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678636 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.435181 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.436485 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.859189 4183 generic.go:334] "Generic (PLEG): container finished" podID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerID="a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c" exitCode=0 Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.859276 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerDied","Data":"a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c"} Aug 13 19:57:56 crc kubenswrapper[4183]: I0813 19:57:56.432581 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:56 crc kubenswrapper[4183]: I0813 19:57:56.433008 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.076399 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.214729 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.214952 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.214984 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.216641 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume" (OuterVolumeSpecName: "config-volume") pod "8500d7bd-50fb-4ca6-af41-b7a24cae43cd" (UID: "8500d7bd-50fb-4ca6-af41-b7a24cae43cd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.223045 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") on node \"crc\" DevicePath \"\"" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.232093 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8500d7bd-50fb-4ca6-af41-b7a24cae43cd" (UID: "8500d7bd-50fb-4ca6-af41-b7a24cae43cd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.240859 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl" (OuterVolumeSpecName: "kube-api-access-5nrgl") pod "8500d7bd-50fb-4ca6-af41-b7a24cae43cd" (UID: "8500d7bd-50fb-4ca6-af41-b7a24cae43cd"). InnerVolumeSpecName "kube-api-access-5nrgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.330182 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") on node \"crc\" DevicePath \"\"" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.330247 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") on node \"crc\" DevicePath \"\"" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.433681 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.433851 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.868510 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerDied","Data":"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998"} Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.868624 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.868702 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:58 crc kubenswrapper[4183]: I0813 19:57:58.432042 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:58 crc kubenswrapper[4183]: I0813 19:57:58.432152 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:59 crc kubenswrapper[4183]: I0813 19:57:59.433562 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:59 crc kubenswrapper[4183]: I0813 19:57:59.433719 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:00 crc kubenswrapper[4183]: I0813 19:58:00.431964 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:00 crc kubenswrapper[4183]: I0813 19:58:00.432051 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:01 crc kubenswrapper[4183]: I0813 19:58:01.434217 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:01 crc kubenswrapper[4183]: I0813 19:58:01.434297 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:02 crc kubenswrapper[4183]: I0813 19:58:02.436078 4183 patch_prober.go:28] interesting 
pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:02 crc kubenswrapper[4183]: I0813 19:58:02.436184 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:03 crc kubenswrapper[4183]: I0813 19:58:03.434049 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:03 crc kubenswrapper[4183]: I0813 19:58:03.434158 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:04 crc kubenswrapper[4183]: I0813 19:58:04.431247 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:04 crc kubenswrapper[4183]: I0813 19:58:04.433048 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:05 crc kubenswrapper[4183]: I0813 19:58:05.433205 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:05 crc kubenswrapper[4183]: I0813 19:58:05.433339 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.337633 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.337723 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.338150 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.338265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:58:06 crc kubenswrapper[4183]: I0813 19:58:06.435695 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:06 crc kubenswrapper[4183]: I0813 19:58:06.436073 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:07 crc kubenswrapper[4183]: I0813 19:58:07.434455 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:07 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:07 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:07 crc kubenswrapper[4183]: I0813 19:58:07.434626 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.318713 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.320372 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.320732 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.321019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.320478 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.324305 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.324482 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.324587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:58:08 crc kubenswrapper[4183]: I0813 19:58:08.434303 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:08 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:08 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:08 crc kubenswrapper[4183]: I0813 19:58:08.434446 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:09 crc kubenswrapper[4183]: I0813 19:58:09.438110 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:09 crc kubenswrapper[4183]: I0813 19:58:09.438240 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:10 crc kubenswrapper[4183]: I0813 19:58:10.432062 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:10 crc kubenswrapper[4183]: I0813 19:58:10.432208 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:11 crc kubenswrapper[4183]: I0813 19:58:11.433134 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:11 crc kubenswrapper[4183]: I0813 19:58:11.433293 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:12 crc kubenswrapper[4183]: 
I0813 19:58:12.433039 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:12 crc kubenswrapper[4183]: I0813 19:58:12.433197 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:13 crc kubenswrapper[4183]: I0813 19:58:13.432221 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:13 crc kubenswrapper[4183]: I0813 19:58:13.432940 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:14 crc kubenswrapper[4183]: I0813 19:58:14.432003 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:14 crc kubenswrapper[4183]: I0813 19:58:14.432115 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:15 crc kubenswrapper[4183]: I0813 19:58:15.434366 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:15 crc kubenswrapper[4183]: I0813 19:58:15.434536 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.433911 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.434117 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.434269 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.435901 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac"} pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted" Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.435988 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" containerID="cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac" gracePeriod=3600 Aug 13 19:58:21 crc kubenswrapper[4183]: E0813 19:58:21.211747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:58:22 crc kubenswrapper[4183]: E0813 19:58:22.211080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:58:23 crc kubenswrapper[4183]: E0813 19:58:23.210866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.354289 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.354912 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.355202 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.355269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313227 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313316 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313602 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.314935 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.314991 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.315100 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.315148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:58:46 crc kubenswrapper[4183]: E0813 19:58:46.213435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:58:46 crc kubenswrapper[4183]: E0813 19:58:46.214118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:58:47 crc kubenswrapper[4183]: E0813 19:58:47.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080127 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080216 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080259 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080316 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080425 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080465 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080567 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080612 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080824 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081066 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081121 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081186 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081251 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081320 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081397 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081433 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082076 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082150 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.097046 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.098249 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.098579 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100112 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100300 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100465 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100595 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100720 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100903 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100963 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 
19:58:54.100738 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101123 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101188 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101134 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101288 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101366 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101482 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101562 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101485 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101434 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.102433 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.102486 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.102574 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.104960 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.106550 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 
19:58:54.106853 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.109448 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.115525 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.118523 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.120930 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.121983 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.125282 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.125352 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.125507 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.126536 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.129603 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.132968 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133390 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133558 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133718 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133767 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133918 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.134768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.135522 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:58:54 crc 
kubenswrapper[4183]: I0813 19:58:54.136703 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.137371 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.140741 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.141097 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.142731 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.184422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.184966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.185619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.185953 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.186153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.186467 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.186944 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.187109 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.187445 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.193122 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.199636 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.201391 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.202267 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.204150 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.204993 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.205435 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.206269 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.210386 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.214730 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.216506 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.218405 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.220324 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.221533 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.224521 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.238013 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.238136 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.248146 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290025 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290237 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290272 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290515 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290589 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290827 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291088 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291121 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291291 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291392 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291457 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291487 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291516 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291588 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291614 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291670 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291935 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291992 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292034 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292108 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292183 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292247 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: 
\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292302 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292352 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292450 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292480 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 
19:58:54.292532 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292866 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292912 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292982 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293008 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293058 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293094 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293232 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293271 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293360 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.294860 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.299637 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.301252 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.302211 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.302283 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.302424 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.307601 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.308881 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.309144 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.309362 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.310231 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.313753 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.314121 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.314221 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315010 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315064 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315190 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315508 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315681 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.317130 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.318242 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.320458 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321129 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321459 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321901 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.325876 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.325991 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.326425 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.327555 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.328657 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.331008 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.331503 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.331983 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.332229 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.332639 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.338987 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.339016 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.335887 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.339947 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.341726 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336054 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336159 4183 reflector.go:351] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336223 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336368 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336530 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336626 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336744 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336972 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337111 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337240 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337373 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337436 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337619 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337677 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337693 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.348095 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.348521 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.355957 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.362342 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.363214 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.363612 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.400108 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.400259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.402007 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.367088 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.368167 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.368302 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.369592 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.370043 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.372150 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.372521 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.373311 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.409707 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321217 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.390198 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.384106 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.395368 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.395563 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321494 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.395669 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.393617 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.396453 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.396729 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
object-"openshift-oauth-apiserver"/"etcd-client" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.416231 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397048 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397222 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397500 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397694 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.398396 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.395664 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.417493 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.417534 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.417702 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.418067 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.420976 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.422725 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.431009 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.439899 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.440300 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.440377 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.441131 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.442587 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.421919 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.443403 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: 
\"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.443710 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.443991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.444208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.444393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.447106 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: E0813 19:58:54.448506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 20:00:56.448363925 +0000 UTC m=+1023.141028744 (durationBeforeRetry 2m2s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.450060 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.450766 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.451757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.451936 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.452496 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454480 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454555 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454642 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454686 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454853 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454889 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454916 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454943 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454969 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455021 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455045 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455083 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455114 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455146 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 
crc kubenswrapper[4183]: I0813 19:58:54.455178 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455232 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455315 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455339 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455384 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455411 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455444 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455487 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod 
\"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455533 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455570 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455597 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455624 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455657 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455682 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455871 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455899 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457042 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457406 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457463 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457575 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.464222 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.465881 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: 
\"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.466387 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.471110 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.471856 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.472186 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.472991 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.475297 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.476227 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.476432 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.476713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.488593 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.489082 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: 
\"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.493037 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.493886 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.495258 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.497182 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.497293 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.503497 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.510602 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.512317 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.512639 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.512928 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513074 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513259 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:58:54 
crc kubenswrapper[4183]: I0813 19:58:54.513276 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513425 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513479 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513585 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513994 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514134 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514230 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514270 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514464 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514484 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514690 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514954 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.515130 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.516452 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.521692 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.522016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.522394 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.522642 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.523288 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.523771 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.524764 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.525530 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.526728 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.527908 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.529986 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.530150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.530339 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.531438 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.532171 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.532502 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.533421 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.535007 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.535185 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.535903 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.537752 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.538292 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.539487 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.539883 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.540175 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.540439 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.540740 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.542768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod 
\"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.542907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.545213 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.557110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.558604 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.564140 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.564514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.568286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.572614 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.579070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.588214 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.588667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.597455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.602158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.607672 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.608537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.621518 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.623956 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.635440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.647748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.652661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.668527 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.670606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.670688 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.672019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681257 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681384 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681426 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681481 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681503 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.686996 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.687358 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.687616 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.698272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.702768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.706755 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.713401 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.717365 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.724723 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.724718 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.725372 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.744518 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.745493 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.760719 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.763596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.764477 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.775288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.778455 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.794056 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.795378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.797673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.799550 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.804231 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.804981 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.826227 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.828321 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.838267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.839614 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.839765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.854303 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.863165 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.869181 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.870553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.881145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.886198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.890507 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.892445 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.904768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.908429 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.917146 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.935259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.935682 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936354 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.948120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.972746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.017116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.017340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.017900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.203144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.203212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:56 crc kubenswrapper[4183]: I0813 19:58:56.183104 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff"}
Aug 13 19:58:56 crc kubenswrapper[4183]: I0813 19:58:56.198351 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb"}
Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.443884 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd556935_a077_45df_ba3f_d42c39326ccd.slice/crio-3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219 WatchSource:0}: Error finding container 3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219: Status 404 returned error can't find the container with id 3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219
Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.457129 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda702c6d2_4dde_4077_ab8c_0f8df804bf7a.slice/crio-2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5 WatchSource:0}: Error finding container 2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5: Status 404 returned error can't find the container with id 2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5
Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.870876 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63eb7413_02c3_4d6e_bb48_e5ffe5ce15be.slice/crio-51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724 WatchSource:0}: Error finding container 51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724: Status 404 returned error can't find the container with id 51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724
Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.887154 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4dca86_e6ee_4ec9_8324_86aff960225e.slice/crio-042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933 WatchSource:0}: Error finding container 042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933: Status 404 returned error can't find the container with id 042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933
Aug 13 19:58:57 crc kubenswrapper[4183]: W0813 19:58:57.173735 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4092a9f8_5acc_4932_9e90_ef962eeb301a.slice/crio-40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748 WatchSource:0}: Error finding container 40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748: Status 404 returned error can't find the container with id 40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748
Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.210363 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0"}
Aug 13 19:58:57 crc kubenswrapper[4183]: W0813 19:58:57.222952 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf1a8966_f594_490a_9fbb_eec5bafd13d3.slice/crio-44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2 WatchSource:0}: Error finding container 44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2: Status 404 returned error can't find the container with id 44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2
Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268604 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724"}
Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368"}
Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268703 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5"}
Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268728 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2"}
Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.337665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61"}
Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.719372 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5"}
Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.741147 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8"}
Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.206658 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219"}
Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.236049 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748"}
Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.358206 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884"}
Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.361406 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e"}
Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.432246 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933"}
Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.500593 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerStarted","Data":"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039"}
Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.506781 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerStarted","Data":"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f"}
Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.512361 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98"}
Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.527693 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2"}
Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.539266 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239"}
Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.545732 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821"}
Aug 13 19:58:59 crc kubenswrapper[4183]: E0813 19:58:59.842138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:58:59 crc kubenswrapper[4183]: E0813 19:58:59.842286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:59:00 crc kubenswrapper[4183]: I0813 19:59:00.718280 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb"}
Aug 13 19:59:00 crc kubenswrapper[4183]: I0813 19:59:00.740672 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"ffa2ba8d5c39d98cd54f79874d44a75e8535b740b4e7b22d06c01c67e926ca36"}
Aug 13 19:59:00 crc kubenswrapper[4183]: W0813 19:59:00.755194 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b5c38ff_1fa8_4219_994d_15776acd4a4d.slice/crio-2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892 WatchSource:0}: Error finding container 2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892: Status 404 returned error can't find the container with id 2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892
Aug 13 19:59:00 crc kubenswrapper[4183]: W0813 19:59:00.761219 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13ad7555_5f28_4555_a563_892713a8433a.slice/crio-8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141 WatchSource:0}: Error finding container 8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141: Status 404 returned error can't find the container with id 8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141
Aug 13 19:59:00 crc kubenswrapper[4183]: I0813 19:59:00.877647 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436"}
Aug 13 19:59:00 crc kubenswrapper[4183]: W0813 19:59:00.927578 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13045510_8717_4a71_ade4_be95a76440a7.slice/crio-63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc WatchSource:0}: Error finding container 63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc: Status 404 returned error can't find the container with id 63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc
Aug 13 19:59:01 crc kubenswrapper[4183]: W0813 19:59:01.027943 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d67253e_2acd_4bc1_8185_793587da4f17.slice/crio-282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722 WatchSource:0}: Error finding container 282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722: Status 404 returned error can't find the container with id 282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722
Aug 13 19:59:01 crc kubenswrapper[4183]: E0813 19:59:01.219981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.429123 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc"}
Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.542201 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892"}
Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.635732 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722"}
Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.804379 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerStarted","Data":"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141"}
Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.933327 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d"}
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.191186 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"a7b73c0ecb48e250899c582dd00bb24b7714077ab1f62727343c931aaa84b579"}
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.265525 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"3137e2c39453dcdeff67eb557e1f28db273455a3b55a18b79757d9f183fde4e9"}
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.268364 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.284147 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.284445 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.299428 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"2a3de049472dc73b116b7c97ddeb21440fd8f50594e5e9dd726a1c1cfe0bf588"}
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.300463 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.302653 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.302736 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.307569 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"96c6df9a2045ea9da57200221317b32730a7efb228b812d5bc7a5eef696963f6"}
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.528566 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.529978 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.528729 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.530099 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.538973 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.539071 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.541196 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.541284 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.818165 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"]
Aug 13 19:59:05 crc kubenswrapper[4183]: W0813 19:59:05.099673 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ad279b4_d9dc_42a8_a1c8_a002bd063482.slice/crio-9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7 WatchSource:0}: Error finding container 9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7: Status 404 returned error can't find the container with id 9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.361704 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.428931 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.442340 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.452974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerStarted","Data":"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.469059 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerStarted","Data":"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.700655 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerStarted","Data":"47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.737738 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerStarted","Data":"a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.748648 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.755641 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.780538 4183 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649" exitCode=0
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.782330 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.808228 4183 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac" exitCode=0
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.808611 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac"}
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.808772 4183 scope.go:117] "RemoveContainer" containerID="4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02"
Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.862679 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"11a119fa806fd94f2b3718680e62c440fc53a5fd0df6934b156abf3171c59e5b"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.002575 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.137683 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-8-crc"]
Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220277 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16"
Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220400 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16"
Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220580 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ncrf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7287f_openshift-marketplace(887d596e-c519-4bfa-af90-3edd9e1b2f0f): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.221163 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"ae65970c89fa0f40e01774098114a6c64c75a67483be88aef59477e78bbb3f33"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.516774 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.546937 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.553253 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.625622 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"078835e6e37f63907310c41b225ef71d7be13426f87f8b32c57e6b2e8c13a5a8"}
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.626522 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.626623 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.649644 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.649752 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Aug 13 19:59:07 crc kubenswrapper[4183]: W0813 19:59:06.994479 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9127708_ccfd_4891_8a3a_f0cacb77e0f4.slice/crio-0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238 WatchSource:0}: Error finding container 0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238: Status 404 returned error can't find the container with id 0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238
Aug 13 19:59:07 crc kubenswrapper[4183]: W0813 19:59:07.069131 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae0dfbb_a0a9_45bb_85b5_cd9f94f64fe7.slice/crio-717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5 WatchSource:0}: Error finding container 717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5: Status 404 returned error can't find the container with id 717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5
Aug 13 19:59:07 crc kubenswrapper[4183]: W0813 19:59:07.241660 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d51f445_054a_4e4f_a67b_a828f5a32511.slice/crio-22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed WatchSource:0}: Error finding container 22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed: Status 404 returned error can't find the container with id 22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.687314 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.708549 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.778736 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.789641 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.867302 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerStarted","Data":"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.914018 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687"}
Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.984149 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.082174 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.130544 4183 generic.go:334] "Generic (PLEG): container finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8" exitCode=0
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.130656 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.206688 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerStarted","Data":"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.259460 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.313212 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.326680 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5"}
Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.399579 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.399704 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.400079 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ptdrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f4jkp_openshift-marketplace(4092a9f8-5acc-4932-9e90-ef962eeb301a): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.400136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.467595 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.612595 4183 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4" exitCode=0
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.613514 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.716179 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.718077 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.729190 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.729275 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.934742 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238"}
Aug 13 19:59:09 crc kubenswrapper[4183]: E0813 19:59:09.290158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.018352 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.081748 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"c00af436eed79628e0e4901e79048ca0af8fcfc3099b202cf5fa799464c7fc03"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.135170 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerStarted","Data":"aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.167201 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"24d2c9dad5c7f6fd94e47dca912545c4f5b5cbadb90c11ba477fb1b512f0e277"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.192024 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"459e80350bae6577b517dba7ef99686836a51fad11f6f4125003b262f73ebf17"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.224534 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"d6d93047e42b7c37ac294d852c1865b360a39c098b65b453bf43202316d1ee5f"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.225748 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.225873 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.278220 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc"}
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.318271 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"17f6677962bd95967c105804158d24c9aee9eb80515bdbdb6c49e51ae42b0a5c"}
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.318621 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.328253 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.328368 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.356477 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"8ef23ac527350f7127dc72ec6d1aba3bba5c4b14a730a4bd909a3fdfd399378c"}
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.411405 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"653c5a1f52832901395f8f14e559c992fce4ce38bc73620d39cf1567c2981bf9"}
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.418058 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.427601 4183 patch_prober.go:28] interesting pod/route-controller-manager-5c4dbb8899-tchz5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.427687 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.431216 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.441212 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.441307 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.490618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.491308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.908493 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.909163 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.909333 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n6sqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8jhz6_openshift-marketplace(3f4dca86-e6ee-4ec9-8324-86aff960225e): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.909391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.492982 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerStarted","Data":"0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1"}
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.521463 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body=
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.521576 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused"
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.522274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.675186 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:12 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:12 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.678052 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.741163 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"7a017f2026334b4ef3c2c72644e98cd26b3feafb1ad74386d1d7e4999fa9e9bb"}
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.893079 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.893258 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.457120 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:13 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:13 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.458286 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.555577 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.556327 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.557394 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials.
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.557571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.893152 4183 patch_prober.go:28] interesting pod/route-controller-manager-5c4dbb8899-tchz5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.893326 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:13.988691 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerStarted","Data":"5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df"} Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:13.990019 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.002280 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.002505 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.023732 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6"} Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.061266 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2"} Aug 13 19:59:14 crc kubenswrapper[4183]: 
I0813 19:59:14.125384 4183 generic.go:334] "Generic (PLEG): container finished" podID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerID="c00af436eed79628e0e4901e79048ca0af8fcfc3099b202cf5fa799464c7fc03" exitCode=0 Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.125542 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerDied","Data":"c00af436eed79628e0e4901e79048ca0af8fcfc3099b202cf5fa799464c7fc03"} Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.265455 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9"} Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.266575 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.269384 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.269458 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.409125 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.409241 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.440141 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.440285 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.528690 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.531286 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.532753 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.531345 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.533736 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.536046 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.540190 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.544531 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.696686 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.696924 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" 
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.712116 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.712236 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.901883 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.902317 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.902415 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.902445 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.920225 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.920358 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.951462 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.951540 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.955313 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 
10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.955461 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.027582 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.027930 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.295721 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.460713 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.460930 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.553274 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.553471 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.554294 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": context deadline exceeded" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.554327 4183 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": context deadline exceeded" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.678220 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e"} Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.708219 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"21441aa058a7fc7abd5477d6c596271f085a956981f7a1240f7a277a497c7755"} Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.709051 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.840114 4183 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963" exitCode=0 Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.841377 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963"} Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.842433 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.842496 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.842989 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.843050 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.850667 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:15 crc 
kubenswrapper[4183]: I0813 19:59:15.850753 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092412 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092516 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092636 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tf29r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8s8pc_openshift-marketplace(c782cf62-a827-4677-b3c2-6f82c5f09cbb): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.435723 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.436359 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.436499 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.436555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.450177 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.450374 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.993579 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"55fde84744bf28e99782e189a6f37f50b90f68a3503eb7f58d9744fc329b3ad0"} Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.995511 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.995591 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:17 crc kubenswrapper[4183]: E0813 19:59:17.011104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:17 crc kubenswrapper[4183]: I0813 19:59:17.450267 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:17 crc kubenswrapper[4183]: I0813 19:59:17.451048 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.013627 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069"} Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.027744 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220"} Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.036728 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"097e790a946b216a85d0fae9757cd924373f90ee6f60238beb63ed4aaad70a83"} Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.052644 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" exitCode=0 Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.053390 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3"} Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.221555 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.222256 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.222765 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r7dbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rmwfn_openshift-marketplace(9ad279b4-d9dc-42a8-a1c8-a002bd063482): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.223280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.455540 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.455705 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:19 crc kubenswrapper[4183]: I0813 19:59:19.132644 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.179333 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"7affac532533ef0eeb1ab47860360791c20d3b170a8f0f2ff3a4172b7a3e0d60"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.179418 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:19.322218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.481340 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.481422 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.187629 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"c5e2f15a8db655a6a0bf0f0e7b58aa9539a6061f0ba62d00544e8ae2fda4799c"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.191395 4183 generic.go:334] 
"Generic (PLEG): container finished" podID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerID="b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3" exitCode=0 Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.193318 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerDied","Data":"b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.431924 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.444106 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.444186 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578019 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578086 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578199 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ncrf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7287f_openshift-marketplace(887d596e-c519-4bfa-af90-3edd9e1b2f0f): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:59:21 crc kubenswrapper[4183]: I0813 19:59:21.439511 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:21 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:21 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:21 crc kubenswrapper[4183]: I0813 19:59:21.440174 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.321313 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"c58eafce8379a44387b88a8f240cc4db0f60e96be3a329c57feb7b3d30a9c1df"}
Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.323541 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.333687 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body=
Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.334196 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused"
Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.395051 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"8d517c0fc52e9a1039f5e59cdbb937f13503c7a4c1c4b293a874285946b48f38"}
Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.444092 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:22 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:22 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.444232 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.383529 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.383975 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.384097 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ptdrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f4jkp_openshift-marketplace(4092a9f8-5acc-4932-9e90-ef962eeb301a): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.384157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.446637 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:23 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:23 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.446729 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.541045 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"616a149529a4e62cb9a66b620ce134ef7451a62a02ea4564d08effb1afb8a8e3"}
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.543191 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.550606 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.583318 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerStarted","Data":"b84a7ab7f1820bc9c15f1779999dcf04a421b3a4ef043acf93ea2f14cdcff7d9"}
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.589691 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239"}
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.594615 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.595949 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.596185 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.616582 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body=
Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.616746 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.442155 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:24 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:24 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.442662 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.525297 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.526345 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.528019 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.529015 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.567026 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.621020 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"1cca846256bf85cbd7c7f47d78ffd3a017ed62ad697f87acb64600f492c2e556"}
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.628659 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143"}
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.655400 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.656171 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.665497 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.665614 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.666135 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"882d38708fa83bc398808c0ce244f77c0ef2b6ab6f69e988b1f27aaea5d0229e"}
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.672329 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"19ec4c1780cc88a3cfba567eee52fe5f2e6994b97cbb3947d1ab13f0c4146bf5"}
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.675828 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.676112 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.681676 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body=
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.682043 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.698210 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.807965 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.876653 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.876737 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.877108 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.877152 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.889020 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.889129 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.960069 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.961051 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987503 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987631 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987733 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987653 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.020461 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body=
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.020575 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused"
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021135 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body=
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021177 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused"
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021239 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body=
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021272 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused"
Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.218175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.373679 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.374597 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.374931 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.374982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.434518 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:25 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:25 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.434683 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.688139 4183 generic.go:334] "Generic (PLEG): container finished" podID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerID="b84a7ab7f1820bc9c15f1779999dcf04a421b3a4ef043acf93ea2f14cdcff7d9" exitCode=0
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.688651 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerDied","Data":"b84a7ab7f1820bc9c15f1779999dcf04a421b3a4ef043acf93ea2f14cdcff7d9"}
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.692565 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"98e6fc91236bf9c4dd7a99909033583c8b64e10f67e3130a12a92936c6a6a8dd"}
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.703346 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"f45aa787fb1c206638720c3ec1a09cb5a4462bb90c0d9e77276f385c9f24e9bc"}
Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.708073 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2"}
Aug 13 19:59:26 crc kubenswrapper[4183]: I0813 19:59:26.453310 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:26 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:26 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:26 crc kubenswrapper[4183]: I0813 19:59:26.453464 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:26 crc kubenswrapper[4183]: E0813 19:59:26.580144 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:26 crc kubenswrapper[4183]: E0813 19:59:26.580278 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:26 crc kubenswrapper[4183]: E0813 19:59:26.580401 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n6sqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8jhz6_openshift-marketplace(3f4dca86-e6ee-4ec9-8324-86aff960225e): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:26 crc kubenswrapper[4183]: E0813 19:59:26.580459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.442359 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:27 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:27 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.442744 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.749963 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512"}
Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.761394 4183 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="8d517c0fc52e9a1039f5e59cdbb937f13503c7a4c1c4b293a874285946b48f38" exitCode=0
Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.761740 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"8d517c0fc52e9a1039f5e59cdbb937f13503c7a4c1c4b293a874285946b48f38"}
Aug 13 19:59:28 crc kubenswrapper[4183]: E0813 19:59:28.212953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 19:59:28 crc kubenswrapper[4183]: I0813 19:59:28.371432 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podStartSLOduration=35619914.37117759 podStartE2EDuration="9894h25m14.371177589s" podCreationTimestamp="2024-06-27 13:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 19:59:28.369383438 +0000 UTC m=+935.062048516" watchObservedRunningTime="2025-08-13 19:59:28.371177589 +0000 UTC m=+935.063842437"
Aug 13 19:59:28 crc kubenswrapper[4183]: I0813 19:59:28.441302 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:28 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:28 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:28 crc kubenswrapper[4183]: I0813 19:59:28.441393 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.432333 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:29 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:29 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.433101 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.843299 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.844565 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.846243 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.846371 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Aug 13 19:59:30 crc kubenswrapper[4183]: I0813 19:59:30.435651 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:30 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:30 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:30 crc kubenswrapper[4183]: I0813 19:59:30.436305 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325467 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16"
Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325538 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16"
Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325757 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tf29r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8s8pc_openshift-marketplace(c782cf62-a827-4677-b3c2-6f82c5f09cbb): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:59:31 crc kubenswrapper[4183]: I0813 19:59:31.436887 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:31 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:31 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:31 crc kubenswrapper[4183]: I0813 19:59:31.436986 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:31 crc kubenswrapper[4183]: I0813 19:59:31.669384 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 19:59:32 crc kubenswrapper[4183]: I0813 19:59:32.437963 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:32 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:32 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:32 crc kubenswrapper[4183]: I0813 19:59:32.438645 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.160183 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.259101 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "72854c1e-5ae2-4ed6-9e50-ff3bccde2635" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.259682 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") "
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.260125 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") "
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.260634 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.290011 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "72854c1e-5ae2-4ed6-9e50-ff3bccde2635" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.362543 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.440531 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:33 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:33 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.440941 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.831200 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerDied","Data":"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877"}
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.831293 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877"
Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.831374 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.211927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.343755 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16"
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.344580 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16"
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.344712 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r7dbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rmwfn_openshift-marketplace(9ad279b4-d9dc-42a8-a1c8-a002bd063482): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.344764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.433338 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:34 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:34 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.433458 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.841116 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.841658 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872051 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872110 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872615 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872671 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.873283 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875150 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875369 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9" gracePeriod=2
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875904 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875965 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.949438 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.949705 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.985305 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.985402 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.986513 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.987203 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.019257 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body=
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.019362 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.020556 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body=
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.020970 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.438605 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:35 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:35 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.438911 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.482606 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.751490 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752102 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.751981 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752228 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752015 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752299 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.769313 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.858535 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.860310 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9" exitCode=0
Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.860468 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9"}
Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.022392 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.022581 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.067663 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.432964 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:36 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:36 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.433261 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:37 crc kubenswrapper[4183]: E0813 19:59:37.215374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:59:37 crc kubenswrapper[4183]: I0813 19:59:37.447280 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:37 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:37 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:37 crc kubenswrapper[4183]: I0813 19:59:37.447479 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:38 crc kubenswrapper[4183]: E0813 19:59:38.215975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.435953 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:38 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:38 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.436590 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1"
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.932638 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"7342452c1232185e3cd70eb0d269743e495acdb67ac2358d63c1509e164b1377"} Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.939102 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"} Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.940161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:59:39 crc kubenswrapper[4183]: E0813 19:59:39.223292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:59:39 crc kubenswrapper[4183]: I0813 19:59:39.443735 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:39 crc kubenswrapper[4183]: I0813 19:59:39.444275 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:39 crc kubenswrapper[4183]: I0813 19:59:39.961542 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"ff87aa3e7fe778204f9c122934ebd1afdd2fc3dff3e2c7942831852cb04c7fc6"} Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.115312 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-vlbxv"] Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.116977 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" containerID="cri-o://47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7" gracePeriod=30 Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.447684 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:40 crc 
kubenswrapper[4183]: healthz check failed Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.448063 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.943630 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-kk8kg"] Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.951272 4183 topology_manager.go:215] "Topology Admit Handler" podUID="e4a7de23-6134-4044-902a-0900dc04a501" podNamespace="openshift-service-ca" podName="service-ca-666f99b6f-kk8kg" Aug 13 19:59:40 crc kubenswrapper[4183]: E0813 19:59:40.951892 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerName="pruner" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.951963 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerName="pruner" Aug 13 19:59:40 crc kubenswrapper[4183]: E0813 19:59:40.952055 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerName="collect-profiles" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.952067 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerName="collect-profiles" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.952223 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerName="pruner" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.952247 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerName="collect-profiles" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.953316 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.968896 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.040960 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.073230 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.073359 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.073391 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.090682 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-kk8kg"] Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.178551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.178691 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.178721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.180394 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 
13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.253571 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.355614 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.447413 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.447506 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.611295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.003196 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f"} Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.005033 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.005239 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.005304 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:42 crc kubenswrapper[4183]: E0813 19:59:42.238198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.450760 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.451196 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.662137 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.677438 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.664605 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.677534 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.016536 4183 generic.go:334] "Generic (PLEG): container finished" podID="378552fd-5e53-4882-87ff-95f3d9198861" containerID="47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7" exitCode=0 Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.016921 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerDied","Data":"47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7"} Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.018079 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.018295 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:43 crc 
kubenswrapper[4183]: I0813 19:59:43.439731 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.440334 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:44 crc kubenswrapper[4183]: E0813 19:59:44.213760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.441219 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.441340 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.594374 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.819339 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.871664 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.871873 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.872118 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.872210 4183 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:44 crc kubenswrapper[4183]: E0813 19:59:44.874435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.949683 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.950412 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.298527 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.310054 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.441733 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.442634 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.649936 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.650038 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.649945 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.650244 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.884165 4183 patch_prober.go:28] interesting pod/authentication-operator-7cc7ff75d5-g9qv8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.885001 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.948340 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:59:46 crc kubenswrapper[4183]: I0813 19:59:46.437716 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:46 crc kubenswrapper[4183]: I0813 19:59:46.438164 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.329990 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.330495 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.330660 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ncrf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7287f_openshift-marketplace(887d596e-c519-4bfa-af90-3edd9e1b2f0f): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.330729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:59:47 crc kubenswrapper[4183]: I0813 19:59:47.573828 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:47 crc kubenswrapper[4183]: I0813 19:59:47.573981 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:47 crc kubenswrapper[4183]: I0813 19:59:47.799589 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-kk8kg"] Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.080496 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee"} Aug 13 19:59:48 crc kubenswrapper[4183]: E0813 19:59:48.334680 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:48 crc kubenswrapper[4183]: E0813 19:59:48.334954 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:48 crc kubenswrapper[4183]: E0813 19:59:48.335577 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n6sqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8jhz6_openshift-marketplace(3f4dca86-e6ee-4ec9-8324-86aff960225e): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:48 crc kubenswrapper[4183]: E0813 19:59:48.335720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.434752 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.435306 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.648599 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.649030 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.650082 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.650129 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.650161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.651317 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.651352 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: 
connect: connection refused" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.652510 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"} pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.652585 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" containerID="cri-o://f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b" gracePeriod=30 Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.029359 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]log ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]etcd ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 19:59:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.029884 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.123308 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]log ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]etcd ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 
19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 19:59:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.123512 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.139181 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerDied","Data":"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039"} Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.139746 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.164685 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.194471 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"378552fd-5e53-4882-87ff-95f3d9198861\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.195109 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"378552fd-5e53-4882-87ff-95f3d9198861\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.195253 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"378552fd-5e53-4882-87ff-95f3d9198861\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.202571 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "378552fd-5e53-4882-87ff-95f3d9198861" (UID: "378552fd-5e53-4882-87ff-95f3d9198861"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.208273 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf" (OuterVolumeSpecName: "kube-api-access-d7ntf") pod "378552fd-5e53-4882-87ff-95f3d9198861" (UID: "378552fd-5e53-4882-87ff-95f3d9198861"). InnerVolumeSpecName "kube-api-access-d7ntf". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.220765 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key" (OuterVolumeSpecName: "signing-key") pod "378552fd-5e53-4882-87ff-95f3d9198861" (UID: "378552fd-5e53-4882-87ff-95f3d9198861"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.229296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.229484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.297207 4183 reconciler_common.go:300] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.297611 4183 reconciler_common.go:300] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.297734 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360235 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360331 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360594 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ptdrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f4jkp_openshift-marketplace(4092a9f8-5acc-4932-9e90-ef962eeb301a): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.443457 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.444219 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.879979 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]log ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]etcd ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 19:59:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.880081 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.177107 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.177878 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"34cf17f4d863a4ac8e304ee5c662018d813019d268cbb7022afa9bdac6b80fbd"} Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.441573 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.443668 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:51 crc kubenswrapper[4183]: E0813 19:59:51.212575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.440975 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.441203 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:51 crc kubenswrapper[4183]: E0813 19:59:51.468060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.987666 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"] Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.988080 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" containerID="cri-o://5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df" gracePeriod=30 Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.198111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"5ca33b1d9111046b71500c2532324037d0682ac3c0fabe705b5bd17f91afa552"} Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.198164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.409457 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-vlbxv"] Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.422430 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"] Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.427195 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" containerID="cri-o://aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59" gracePeriod=30 Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.437009 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.437154 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.486875 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-vlbxv"] Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.649433 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.649971 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.845735 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podStartSLOduration=12.845670263 podStartE2EDuration="12.845670263s" podCreationTimestamp="2025-08-13 19:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 19:59:52.781564366 +0000 
UTC m=+959.474229104" watchObservedRunningTime="2025-08-13 19:59:52.845670263 +0000 UTC m=+959.538335011" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.219976 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="378552fd-5e53-4882-87ff-95f3d9198861" path="/var/lib/kubelet/pods/378552fd-5e53-4882-87ff-95f3d9198861/volumes" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.223157 4183 generic.go:334] "Generic (PLEG): container finished" podID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerID="5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df" exitCode=0 Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.223289 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerDied","Data":"5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df"} Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.228417 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerID="aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59" exitCode=0 Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.228543 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerDied","Data":"aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59"} Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.437134 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.437248 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.854176 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.920999 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921104 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921134 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921170 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921195 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.922384 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.922508 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca" (OuterVolumeSpecName: "client-ca") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.923655 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config" (OuterVolumeSpecName: "config") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.969111 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57" (OuterVolumeSpecName: "kube-api-access-pzb57") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "kube-api-access-pzb57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.969275 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023502 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023541 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023554 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023573 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023585 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.238042 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerDied","Data":"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f"} Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.238184 4183 scope.go:117] "RemoveContainer" containerID="5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.238294 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.436856 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.437289 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.642583 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"] Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694196 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694343 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694387 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694444 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694472 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.698711 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.709297 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"] Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.718283 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844327 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844401 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844479 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844546 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.846529 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca" (OuterVolumeSpecName: "client-ca") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.847339 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config" (OuterVolumeSpecName: "config") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.861274 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn" (OuterVolumeSpecName: "kube-api-access-hpzhn") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "kube-api-access-hpzhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.869651 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.871983 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.872086 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.876100 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.876212 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.896218 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]log ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]etcd ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 19:59:54 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 19:59:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.896445 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947258 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947475 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947494 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947512 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.953125 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.953213 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.267619 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.291160 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" path="/var/lib/kubelet/pods/87df87f4-ba66-4137-8e41-1fa632ad4207/volumes" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.294870 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerDied","Data":"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf"} Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.294955 4183 scope.go:117] "RemoveContainer" containerID="aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331335 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"] Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331506 4183 topology_manager.go:215] "Topology Admit Handler" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" podNamespace="openshift-controller-manager" podName="controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.331700 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331717 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.331736 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331745 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.331763 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331814 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331971 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331991 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.332008 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.332662 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347326 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347460 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347597 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tf29r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8s8pc_openshift-marketplace(c782cf62-a827-4677-b3c2-6f82c5f09cbb): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367304 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367481 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367520 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367571 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367684 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.445246 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.445358 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.468643 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " 
pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.468993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.469037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.469071 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.469106 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.648929 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.649094 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.692567 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.694217 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.696064 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.700916 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " 
pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.701464 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.711293 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.711751 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.791361 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"] Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.012351 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.152557 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.166000 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.177683 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.435947 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.436149 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.471761 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"] Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.471976 4183 topology_manager.go:215] "Topology Admit Handler" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc 
kubenswrapper[4183]: I0813 19:59:56.475959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.612404 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.612571 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.612630 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.613039 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.679475 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.714217 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.714382 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.714435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc 
kubenswrapper[4183]: I0813 19:59:56.714613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.847427 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.847823 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.848006 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.857636 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.923763 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.000386 4183 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.020516 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.042895 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.052159 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.059066 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.070227 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.115680 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.165521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.173370 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"] Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.209604 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:59:57 crc kubenswrapper[4183]: E0813 19:59:57.219465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.437713 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.437919 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.929510 4183 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-08-13T19:59:57.000640771Z","Handler":null,"Name":""} Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.085657 4183 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.085937 4183 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.115602 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": read tcp 10.217.0.2:40914->10.217.0.23:8443: read: connection reset by peer" start-of-body= Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.115726 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": read tcp 10.217.0.2:40914->10.217.0.23:8443: read: connection reset by peer" Aug 13 19:59:58 crc kubenswrapper[4183]: E0813 19:59:58.213433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.357685 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" 
event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"42d711544e11c05fc086e8f0c7a21cc883bc678e9e7c9221d490bdabc9cffe87"} Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.360293 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/0.log" Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.360735 4183 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b" exitCode=255 Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.360869 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"} Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.442113 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.442250 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:59 crc kubenswrapper[4183]: E0813 19:59:59.236509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.435876 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.436152 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.866426 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.909397 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.027588 4183 kubelet.go:2436] "SyncLoop 
UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"] Aug 13 20:00:00 crc kubenswrapper[4183]: W0813 20:00:00.070724 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16f68e98_a8f9_417a_b92b_37bfd7b11e01.slice/crio-4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54 WatchSource:0}: Error finding container 4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54: Status 404 returned error can't find the container with id 4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54 Aug 13 20:00:00 crc kubenswrapper[4183]: E0813 20:00:00.219221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.430252 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.430382 4183 topology_manager.go:215] "Topology Admit Handler" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.431281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.451065 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:00 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.451160 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.481406 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerStarted","Data":"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54"} Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.517054 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.517335 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.563374 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod 
\"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.563523 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.563608 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.587423 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.650425 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.650573 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.672066 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.672139 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.672199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.681316 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod 
\"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.767383 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.831735 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.214016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354370 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354432 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354548 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r7dbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rmwfn_openshift-marketplace(9ad279b4-d9dc-42a8-a1c8-a002bd063482): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 20:00:01 crc kubenswrapper[4183]: I0813 20:00:01.435662 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:01 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:01 crc kubenswrapper[4183]: I0813 20:00:01.437439 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:01 crc kubenswrapper[4183]: I0813 20:00:01.694507 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"] Aug 13 20:00:02 crc kubenswrapper[4183]: E0813 20:00:02.212677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.434541 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:02 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.434647 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.494456 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.683346 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerStarted","Data":"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a"} Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.435374 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:03 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.435498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.648682 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.649216 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.435246 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:04 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.435580 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.872257 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.872991 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 
10.217.0.66:8080: connect: connection refused" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873061 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.872265 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873415 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873982 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.875079 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted" Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.875131 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f" gracePeriod=2 Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.025423 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.026036 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.396987 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.434620 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:05 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.435185 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.716564 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f" exitCode=0 Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.716715 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f"} Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.717008 4183 scope.go:117] "RemoveContainer" containerID="b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9" Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.719698 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerStarted","Data":"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348"} Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.435459 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:06 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.436133 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.650037 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.650225 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.730625 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" 
event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerStarted","Data":"d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c"} Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.731126 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.734101 4183 patch_prober.go:28] interesting pod/route-controller-manager-5b77f9fd48-hb8xt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.734194 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.735317 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerStarted","Data":"3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc"} Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.741610 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/0.log" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.742420 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b"} Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.743332 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.807511 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" podStartSLOduration=12.807457808 podStartE2EDuration="12.807457808s" podCreationTimestamp="2025-08-13 19:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:06.802006483 +0000 UTC m=+973.494671341" watchObservedRunningTime="2025-08-13 20:00:06.807457808 +0000 UTC m=+973.500122546" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.823476 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.823671 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a0453d24-e872-43af-9e7a-86227c26d200" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.824558 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.830140 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.830723 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.843831 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.844033 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.857413 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.946207 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.946359 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.946558 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.951349 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" podStartSLOduration=13.9512997 podStartE2EDuration="13.9512997s" podCreationTimestamp="2025-08-13 19:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:06.947395608 +0000 UTC m=+973.640060666" watchObservedRunningTime="2025-08-13 20:00:06.9512997 +0000 UTC m=+973.643964418" Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.023143 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.049629 4183 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.059444 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.444468 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:07 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:07 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.444561 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.597730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:08 crc kubenswrapper[4183]: I0813 20:00:08.042742 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 20:00:08 crc kubenswrapper[4183]: I0813 20:00:08.440824 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:08 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:08 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:08 crc kubenswrapper[4183]: I0813 20:00:08.441453 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:09 crc kubenswrapper[4183]: E0813 20:00:09.211359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.434143 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"] Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.434352 4183 topology_manager.go:215] "Topology Admit Handler" podUID="227e3650-2a85-4229-8099-bb53972635b2" podNamespace="openshift-kube-controller-manager" podName="installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.435408 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.436985 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:09 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.437129 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.597139 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.597291 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.597420 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699205 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699229 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699398 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.137030 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"] Aug 13 20:00:10 crc kubenswrapper[4183]: E0813 20:00:10.214874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.218068 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.346719 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.444256 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:10 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.447014 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.514376 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.815665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"} Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.818629 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.818751 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.818898 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.832568 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerStarted","Data":"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786"} Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.408692 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"] Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.409538 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager" containerID="cri-o://3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc" gracePeriod=30 Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.446038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:11 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.446320 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.657414 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"] Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.657694 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" 
containerID="cri-o://d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c" gracePeriod=30 Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.839995 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.840697 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:12 crc kubenswrapper[4183]: E0813 20:00:12.214330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 20:00:12 crc kubenswrapper[4183]: E0813 20:00:12.214469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 20:00:12 crc kubenswrapper[4183]: E0813 20:00:12.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.432418 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:12 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.432950 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.827582 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Aug 13 20:00:12 crc kubenswrapper[4183]: W0813 20:00:12.844932 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda0453d24_e872_43af_9e7a_86227c26d200.slice/crio-beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319 WatchSource:0}: Error finding container beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319: Status 404 returned error can't find the container with id beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319 Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.874373 
4183 generic.go:334] "Generic (PLEG): container finished" podID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerID="d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c" exitCode=0
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.874577 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerDied","Data":"d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c"}
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.882748 4183 generic.go:334] "Generic (PLEG): container finished" podID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerID="3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc" exitCode=0
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.883140 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerDied","Data":"3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc"}
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.884751 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.891048 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.077103 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" podStartSLOduration=13.077002107 podStartE2EDuration="13.077002107s" podCreationTimestamp="2025-08-13 20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:13.063204943 +0000 UTC m=+979.755870041" watchObservedRunningTime="2025-08-13 20:00:13.077002107 +0000 UTC m=+979.769667125"
Aug 13 20:00:13 crc kubenswrapper[4183]: E0813 20:00:13.215023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.415704 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"]
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.444931 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-765b47f944-n2lhl"]
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.453029 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:13 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:13 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.453140 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:13 crc kubenswrapper[4183]: W0813 20:00:13.496289 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod227e3650_2a85_4229_8099_bb53972635b2.slice/crio-ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef WatchSource:0}: Error finding container ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef: Status 404 returned error can't find the container with id ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.942064 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerStarted","Data":"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef"}
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.944612 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerStarted","Data":"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319"}
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.967043 4183 generic.go:334] "Generic (PLEG): container finished" podID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerID="f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786" exitCode=0
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.967120 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerDied","Data":"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786"}
Aug 13 20:00:14 crc kubenswrapper[4183]: E0813 20:00:14.233752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.437693 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:14 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:14 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.438231 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.871953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.873447 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.872215 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.874133 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.949658 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.949746 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.976380 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerDied","Data":"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54"}
Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.976449 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54"
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.002072 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt"
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.103994 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") "
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104141 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") "
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104251 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") "
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104314 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") "
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104408 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") "
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.105448 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca" (OuterVolumeSpecName: "client-ca") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.106161 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config" (OuterVolumeSpecName: "config") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.106630 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.144033 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.164398 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt" (OuterVolumeSpecName: "kube-api-access-rvvgt") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "kube-api-access-rvvgt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207183 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207266 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207297 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207317 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207334 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.440088 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:15 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:15 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.440501 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.687573 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.818880 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") "
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.819048 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") "
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.819085 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") "
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.819178 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") "
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.821131 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca" (OuterVolumeSpecName: "client-ca") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.821665 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config" (OuterVolumeSpecName: "config") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.829234 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72" (OuterVolumeSpecName: "kube-api-access-njx72") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "kube-api-access-njx72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.839170 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920862 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920931 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920954 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920969 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988923 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerDied","Data":"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a"}
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988936 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt"
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988992 4183 scope.go:117] "RemoveContainer" containerID="d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c"
Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988982 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.341272 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.432894 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") "
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.433074 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") "
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.433126 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") "
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.434291 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume" (OuterVolumeSpecName: "config-volume") pod "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" (UID: "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.439630 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c" (OuterVolumeSpecName: "kube-api-access-ctj8c") pod "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" (UID: "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27"). InnerVolumeSpecName "kube-api-access-ctj8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.446259 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:16 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:16 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.446463 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" (UID: "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.446488 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.543389 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.543514 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.543544 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.006121 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerStarted","Data":"3e7eb9892d5a94b55021884eb7d6b9f29402769ffac497c2b86edb6618a7ef4d"}
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.013564 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerDied","Data":"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348"}
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.013619 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.013743 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"
Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.213161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337281 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337432 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" podNamespace="openshift-kube-apiserver" podName="installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.337602 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337620 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager"
Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.337640 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerName="collect-profiles"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337653 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerName="collect-profiles"
Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.337671 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337716 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338220 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerName="collect-profiles"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338243 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338255 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.383506 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.384930 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.385493 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.404515 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.412347 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-4kgh8"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.448936 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:17 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:17 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.449427 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.486887 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487010 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487142 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487243 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487684 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.520086 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.588519 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.588681 4183 topology_manager.go:215] "Topology Admit Handler" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.589416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627075 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627262 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627423 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627961 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.628068 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.628206 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697161 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697279 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697345 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697383 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798655 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798729 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798921 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.801515 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.802501 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.847371 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.914268 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"]
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.945291 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.077101 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerStarted","Data":"1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99"}
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.112972 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"]
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.129067 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.155154 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.213252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.263518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.464305 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:18 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:18 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.464656 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.806627 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d9678894c-wx62n"]
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.806961 4183 topology_manager.go:215] "Topology Admit Handler" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" podNamespace="openshift-console" podName="console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.807928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.869628 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.937734 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.937945 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938025 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938067 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938098 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938179 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938207 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.951491 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"]
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.039936 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040041 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040075 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040179 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040204 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040248 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.043475 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.057114 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.058261 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.062310 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"]
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.074712 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.088099 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.102692 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.203213 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"]
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.293347 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" path="/var/lib/kubelet/pods/16f68e98-a8f9-417a-b92b-37bfd7b11e01/volumes"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.308462 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" path="/var/lib/kubelet/pods/83bf0764-e80c-490b-8d3c-3cf626fdb233/volumes"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.426015 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.441234 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:19 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:19 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.441519 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.537411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.223065 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"]
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.223268 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" podNamespace="openshift-controller-manager" podName="controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.224825 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.230713 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-9-crc" podStartSLOduration=11.230656964 podStartE2EDuration="11.230656964s" podCreationTimestamp="2025-08-13 20:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:20.208394449 +0000 UTC m=+986.901059297" watchObservedRunningTime="2025-08-13 20:00:20.230656964 +0000 UTC m=+986.923321692"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.253745 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.254245 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.254530 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.254737 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.255015 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.259287 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.288758 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.350073 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"]
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378405 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378560 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378654 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378685 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378717 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.456205 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:20 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:20 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.456309 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.487569 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=14.487510238 podStartE2EDuration="14.487510238s" podCreationTimestamp="2025-08-13 20:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:20.487207379 +0000 UTC m=+987.179872367" watchObservedRunningTime="2025-08-13 20:00:20.487510238 +0000 UTC m=+987.180175056"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489643 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489816 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489878 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489918 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489970 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.494680 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.504770 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.567351 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.568035 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.632650 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.870208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.163475 4183 generic.go:334] "Generic (PLEG): container finished" podID="a0453d24-e872-43af-9e7a-86227c26d200" containerID="3e7eb9892d5a94b55021884eb7d6b9f29402769ffac497c2b86edb6618a7ef4d" exitCode=0
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.163712 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerDied","Data":"3e7eb9892d5a94b55021884eb7d6b9f29402769ffac497c2b86edb6618a7ef4d"}
Aug 13 20:00:21 crc kubenswrapper[4183]: E0813 20:00:21.234485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.442436 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:21 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:21 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.442512 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:22 crc kubenswrapper[4183]: I0813 20:00:22.447411 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:22 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:22 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:22 crc kubenswrapper[4183]: I0813 20:00:22.447973 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:23 crc kubenswrapper[4183]: E0813 20:00:23.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 20:00:23 crc kubenswrapper[4183]: E0813 20:00:23.214767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 20:00:23 crc kubenswrapper[4183]: I0813 20:00:23.442020 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:23 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:23 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:23 crc kubenswrapper[4183]: I0813 20:00:23.442109 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:23 crc kubenswrapper[4183]: I0813 20:00:23.817439 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:23 crc kubenswrapper[4183]: W0813 20:00:23.846698 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1713e8bc_bab0_49a8_8618_9ded2e18906c.slice/crio-1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715 WatchSource:0}: Error finding container 1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715: Status 404 returned error can't find the container with id 1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.033654 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086096 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"a0453d24-e872-43af-9e7a-86227c26d200\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") "
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086222 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"a0453d24-e872-43af-9e7a-86227c26d200\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") "
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086428 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a0453d24-e872-43af-9e7a-86227c26d200" (UID: "a0453d24-e872-43af-9e7a-86227c26d200"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086602 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.096156 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a0453d24-e872-43af-9e7a-86227c26d200" (UID: "a0453d24-e872-43af-9e7a-86227c26d200"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.188626 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.229861 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerDied","Data":"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319"} Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.229921 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.229949 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.237458 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerStarted","Data":"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715"} Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.274326 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.278576 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"] Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.293300 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"] Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.460858 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:24 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.460981 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.804322 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"] Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.871691 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.871820 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.873620 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.873700 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.949736 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.949926 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.065141 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"] Aug 13 20:00:25 crc kubenswrapper[4183]: E0813 20:00:25.213757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.242361 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-585546dd8b-v5m4t"] Aug 13 20:00:25 crc kubenswrapper[4183]: E0813 20:00:25.243561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.249038 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerStarted","Data":"6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.251370 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerStarted","Data":"7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.251428 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" 
event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerStarted","Data":"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.251569 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager" containerID="cri-o://7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7" gracePeriod=30 Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.252282 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.258232 4183 patch_prober.go:28] interesting pod/controller-manager-67685c4459-7p2h8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.258715 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.262914 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerStarted","Data":"bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.262974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerStarted","Data":"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.271544 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerStarted","Data":"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.442476 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:25 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.442661 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.512090 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-console/console-5d9678894c-wx62n" podStartSLOduration=7.512029209 podStartE2EDuration="7.512029209s" 
podCreationTimestamp="2025-08-13 20:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:25.508307233 +0000 UTC m=+992.200972071" watchObservedRunningTime="2025-08-13 20:00:25.512029209 +0000 UTC m=+992.204694147" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.563868 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" podStartSLOduration=14.563758574 podStartE2EDuration="14.563758574s" podCreationTimestamp="2025-08-13 20:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:25.56185584 +0000 UTC m=+992.254520888" watchObservedRunningTime="2025-08-13 20:00:25.563758574 +0000 UTC m=+992.256423352" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.794333 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.794578 4183 topology_manager.go:215] "Topology Admit Handler" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" podNamespace="openshift-image-registry" podName="image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: E0813 20:00:25.797195 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a0453d24-e872-43af-9e7a-86227c26d200" containerName="pruner" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.797239 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0453d24-e872-43af-9e7a-86227c26d200" containerName="pruner" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.797633 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0453d24-e872-43af-9e7a-86227c26d200" containerName="pruner" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.800477 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.946364 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.948726 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949007 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949154 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949303 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949605 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.951486 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.959620 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" podStartSLOduration=14.95954932 podStartE2EDuration="14.95954932s" podCreationTimestamp="2025-08-13 20:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:25.744372094 +0000 UTC m=+992.437037032" watchObservedRunningTime="2025-08-13 20:00:25.95954932 +0000 UTC m=+992.652214048" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.960208 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053304 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053568 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053600 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053652 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.056353 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod 
\"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.057262 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.060476 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.072588 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.077750 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.095379 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.117737 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: E0813 20:00:26.226722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 20:00:26 crc kubenswrapper[4183]: E0813 20:00:26.240942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.324629 4183 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager_controller-manager-67685c4459-7p2h8_a560ec6a-586f-403c-a08e-e3a76fa1b7fd/controller-manager/0.log" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.328329 4183 generic.go:334] "Generic (PLEG): container finished" podID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerID="7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7" exitCode=2 Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.344900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.345270 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager" containerID="cri-o://6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5" gracePeriod=30 Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.350498 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerDied","Data":"7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7"} Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.352716 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.398176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.459657 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.460573 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.461266 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.461434 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.478939 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: 
\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.479169 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.479315 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.479434 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.480475 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.480720 4183 reconciler_common.go:300] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.484328 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.478755 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.545169 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk" (OuterVolumeSpecName: "kube-api-access-khtlk") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "kube-api-access-khtlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.578325 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.579830 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589642 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589706 4183 reconciler_common.go:300] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589728 4183 reconciler_common.go:300] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589743 4183 reconciler_common.go:300] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589861 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.607624 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.693416 4183 reconciler_common.go:300] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.719467 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (OuterVolumeSpecName: "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.743450 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:26 crc kubenswrapper[4183]: [+]has-synced ok Aug 13 20:00:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:26 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.743560 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.795611 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.841473 4183 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.842278 4183 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount\"" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.857663 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.959176 4183 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podaf6b67a3-a2bd-4051-9adc-c208a5a65d79"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podaf6b67a3-a2bd-4051-9adc-c208a5a65d79] : Timed out while waiting for systemd to remove kubepods-burstable-podaf6b67a3_a2bd_4051_9adc_c208a5a65d79.slice" Aug 13 20:00:26 crc kubenswrapper[4183]: E0813 20:00:26.959342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable podaf6b67a3-a2bd-4051-9adc-c208a5a65d79] : unable to destroy cgroup paths for cgroup [kubepods burstable podaf6b67a3-a2bd-4051-9adc-c208a5a65d79] : Timed out while waiting for systemd to remove kubepods-burstable-podaf6b67a3_a2bd_4051_9adc_c208a5a65d79.slice" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 20:00:27 crc kubenswrapper[4183]: E0813 20:00:27.229118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.381153 4183 generic.go:334] "Generic (PLEG): container finished" podID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerID="6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5" exitCode=0 Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.386049 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.381490 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerDied","Data":"6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5"} Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.390105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.439866 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.444650 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.455221 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"] Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.503648 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"] Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.530253 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.609055 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.614083 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.622491 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.623115 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-585546dd8b-v5m4t"] Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.640968 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-585546dd8b-v5m4t"] Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.925967 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.964493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.216388 4183 patch_prober.go:28] interesting pod/route-controller-manager-6cfd9fc8fc-7sbzw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.216741 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.409079 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerStarted","Data":"7be671fc50422e885dbb1fec6a6c30037cba5481e39185347522a94f177d9763"} Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.500363 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.500303538 podStartE2EDuration="11.500303538s" podCreationTimestamp="2025-08-13 20:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:28.495690207 +0000 UTC m=+995.188354975" watchObservedRunningTime="2025-08-13 20:00:28.500303538 +0000 UTC m=+995.192968266" Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.958488 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-67685c4459-7p2h8_a560ec6a-586f-403c-a08e-e3a76fa1b7fd/controller-manager/0.log" Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 
20:00:28.958581 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.062890 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"]
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063091 4183 topology_manager.go:215] "Topology Admit Handler" podUID="00d32440-4cce-4609-96f3-51ac94480aab" podNamespace="openshift-controller-manager" podName="controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: E0813 20:00:29.063268 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063287 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063420 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072336 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") "
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072441 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") "
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072480 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") "
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072519 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") "
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072558 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") "
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.074365 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.075255 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config" (OuterVolumeSpecName: "config") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.075384 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.097608 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.098220 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6" (OuterVolumeSpecName: "kube-api-access-5w8t6") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "kube-api-access-5w8t6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175480 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175590 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175748 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175897 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176096 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176150 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176166 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176182 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176199 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176210 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.227261 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" path="/var/lib/kubelet/pods/af6b67a3-a2bd-4051-9adc-c208a5a65d79/volumes"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.238069 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" path="/var/lib/kubelet/pods/c5bb4cdd-21b9-49ed-84ae-a405b60a0306/volumes"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.277915 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278005 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278102 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.280764 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.289748 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.303540 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.297027 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.446095 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-67685c4459-7p2h8_a560ec6a-586f-403c-a08e-e3a76fa1b7fd/controller-manager/0.log"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.447603 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.448594 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerDied","Data":"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa"}
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.448637 4183 scope.go:117] "RemoveContainer" containerID="7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.534635 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.542744 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.547562 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.580460 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.580551 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.727692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.759209 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"]
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.892572 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"]
Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.908205 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"]
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.302407 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435154 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") "
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435222 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") "
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435287 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") "
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435338 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") "
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.438191 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config" (OuterVolumeSpecName: "config") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.443688 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca" (OuterVolumeSpecName: "client-ca") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.458748 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.496356 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb" (OuterVolumeSpecName: "kube-api-access-9qgvb") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "kube-api-access-9qgvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.523608 4183 generic.go:334] "Generic (PLEG): container finished" podID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" containerID="c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6" exitCode=0
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.523720 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerDied","Data":"c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6"}
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.524488 4183 scope.go:117] "RemoveContainer" containerID="c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6"
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538585 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538648 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538667 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538681 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.546888 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerDied","Data":"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715"}
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.547014 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.547043 4183 scope.go:117] "RemoveContainer" containerID="6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5"
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.863030 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7cbd5666ff-bbfrf"]
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.873688 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.902979 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.987534 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"]
Aug 13 20:00:30 crc kubenswrapper[4183]: W0813 20:00:30.987941 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a7bc46_2f44_4aff_9cb5_97c97a4a8319.slice/crio-7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e WatchSource:0}: Error finding container 7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e: Status 404 returned error can't find the container with id 7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e
Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.086667 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"]
Aug 13 20:00:31 crc kubenswrapper[4183]: W0813 20:00:31.106958 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00d32440_4cce_4609_96f3_51ac94480aab.slice/crio-97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9 WatchSource:0}: Error finding container 97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9: Status 404 returned error can't find the container with id 97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9
Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.228752 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" path="/var/lib/kubelet/pods/1713e8bc-bab0-49a8-8618-9ded2e18906c/volumes"
Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.230549 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" path="/var/lib/kubelet/pods/a560ec6a-586f-403c-a08e-e3a76fa1b7fd/volumes"
Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.586239 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerStarted","Data":"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4"}
Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.596368 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerStarted","Data":"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9"}
Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.624983 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e"}
Aug 13 20:00:32 crc kubenswrapper[4183]: E0813 20:00:32.222479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 20:00:32 crc kubenswrapper[4183]: I0813 20:00:32.647092 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"e95a2bd82003b18d4f81fa9d98e21982ecce835638a4f389a02f1c7db1efd2d6"}
Aug 13 20:00:33 crc kubenswrapper[4183]: E0813 20:00:33.233310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.403280 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"]
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.403521 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: E0813 20:00:33.411971 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.412025 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.412233 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.413558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435584 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435944 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435598 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.436371 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435720 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.445125 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.511701 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"]
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515590 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515713 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515757 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515908 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618536 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618569 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.620528 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.620550 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.636224 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.656596 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerStarted","Data":"32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84"}
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.669757 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerStarted","Data":"71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830"}
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.672107 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.686249 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.686351 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.687119 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53"}
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.688349 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.881044 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.989830 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podStartSLOduration=35619978.989690684 podStartE2EDuration="9894h26m18.989690681s" podCreationTimestamp="2024-06-27 13:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:33.978483142 +0000 UTC m=+1000.671147910" watchObservedRunningTime="2025-08-13 20:00:33.989690681 +0000 UTC m=+1000.682355409"
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.051124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.153396 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podStartSLOduration=10.153340467 podStartE2EDuration="10.153340467s" podCreationTimestamp="2025-08-13 20:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:34.152671758 +0000 UTC m=+1000.845336576" watchObservedRunningTime="2025-08-13 20:00:34.153340467 +0000 UTC m=+1000.846005335"
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.752986 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerDied","Data":"cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014"}
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.755623 4183 scope.go:117] "RemoveContainer" containerID="cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014"
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.778290 4183 generic.go:334] "Generic (PLEG): container finished" podID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" containerID="cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014" exitCode=0
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.784930 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.811093 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt"
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.876467 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.877102 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.877160 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.878764 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted"
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.878979 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a" gracePeriod=2
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.883544 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.883678 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.884083 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.884124 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.949186 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.949289 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.099506 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podStartSLOduration=10.091607161 podStartE2EDuration="10.091607161s" podCreationTimestamp="2025-08-13 20:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:34.228453009 +0000 UTC m=+1000.921117757" watchObservedRunningTime="2025-08-13 20:00:35.091607161 +0000 UTC m=+1001.784272259"
Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.793329 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log"
Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.793791 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" containerID="47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069" exitCode=1
Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.793984 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerDied","Data":"47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069"}
Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.794920 4183 scope.go:117] "RemoveContainer" containerID="47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069"
Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.802757 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a" exitCode=0
Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.804097 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"}
Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.804154 4183 scope.go:117] "RemoveContainer" containerID="f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f"
Aug 13 20:00:36 crc kubenswrapper[4183]: E0813 20:00:36.213445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.534373 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"]
Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.810824 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerStarted","Data":"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e"}
Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.955501 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"]
Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.958703 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-9-crc" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer" containerID="cri-o://1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99" gracePeriod=30
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.119038 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-7-crc"]
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.119167 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" podNamespace="openshift-kube-scheduler" podName="installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.120818 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.138623 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.147529 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-9ln8g"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.150315 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.150644 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.150879 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.238027 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-7-crc"]
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.253661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.253867 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.254054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.261225 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.261665 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.605668 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.792007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.804994 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"]
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.814754 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.816472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.880656 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.880746 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.983580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.983635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.984187 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:38 crc kubenswrapper[4183]: I0813 20:00:38.454577 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"]
Aug 13 20:00:39 crc kubenswrapper[4183]: E0813 20:00:39.390016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.568118 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.569114 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.582126 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.696974 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"]
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.697465 4183 topology_manager.go:215] "Topology Admit Handler" podUID="79050916-d488-4806-b556-1b0078b31e53" podNamespace="openshift-kube-controller-manager" podName="installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.700363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.753930 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" containerID="cri-o://0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1" gracePeriod=14
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.810566 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.810673 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.810716 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.831172 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"]
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944011 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944184 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944405 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944573 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944690 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:40 crc kubenswrapper[4183]: I0813 20:00:40.096732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:00:40 crc kubenswrapper[4183]: I0813 20:00:40.416091 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.385622 4183 generic.go:334] "Generic (PLEG): container finished" podID="13ad7555-5f28-4555-a563-892713a8433a" containerID="0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1" exitCode=0
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.386137 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerDied","Data":"0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1"}
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.401449 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-mtx25"]
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.410324 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" containerID="cri-o://a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143" gracePeriod=90
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.411028 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512" gracePeriod=90
Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.422041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.458973 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-mtx25"]
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.702243 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"]
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.703251 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b23d6435-6431-4905-b41b-a517327385e5" podNamespace="openshift-apiserver" podName="apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.703572 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.703675 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver"
Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.703766 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.703958 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.704089 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="fix-audit-permissions"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.704172 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="fix-audit-permissions"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.704371 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.704486 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.705521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.738116 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834000 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834386 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834513 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834694 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834930 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835076 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835192 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835300 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835453 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835576 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835753 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.939227 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.970617 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") "
pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.971536 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"] Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.974603 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.974710 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.974774 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975084 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975197 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975283 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975403 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975474 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.979601 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.980346 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.980404 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.994656 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.001627 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.003866 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.016768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.070052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.084201 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod 
\"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.107393 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.354144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.892229 4183 generic.go:334] "Generic (PLEG): container finished" podID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerID="850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512" exitCode=0 Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.240716 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336192 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336314 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336353 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336387 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336422 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336463 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 
20:00:43.336507 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336572 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336627 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336677 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336719 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336762 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336889 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336935 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.342265 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.358965 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.362115 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.362656 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.363739 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.380757 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68" (OuterVolumeSpecName: "kube-api-access-w4r68") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "kube-api-access-w4r68". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.411029 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.412205 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.412924 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.412973 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.414127 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.421348 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.424319 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.427660 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439072 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439131 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439151 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439165 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439179 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439193 4183 reconciler_common.go:300] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439206 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439219 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439231 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439245 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439258 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439272 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439283 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439296 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.005191 4183 generic.go:334] "Generic (PLEG): container finished" podID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" containerID="346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2" exitCode=0 Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.005354 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerDied","Data":"346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2"} Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.006295 4183 scope.go:117] "RemoveContainer" containerID="346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2" Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.074016 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerDied","Data":"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141"} Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.074906 4183 scope.go:117] "RemoveContainer" containerID="0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1" Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.075503 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.871563 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.871677 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.884710 4183 patch_prober.go:28] interesting pod/authentication-operator-7cc7ff75d5-g9qv8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.884925 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.952264 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.953407 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.272608 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log" Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.503656 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.603890 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"f8740679d62a596414a4bace5b51c52a61eb8997cb3aad74b6e37fb0898cbd9a"} Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.663716 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.788531 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"] Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.872562 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"] Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.899327 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-7-crc"] Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.265636 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"] Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.266229 4183 topology_manager.go:215] "Topology Admit Handler" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" podNamespace="openshift-authentication" podName="oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: E0813 20:00:46.266462 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.266482 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.266635 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.284461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.369983 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.378862 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.379339 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.379608 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.379753 4183 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380041 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380171 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380307 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380571 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.385252 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.385923 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.386294 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.414543 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.454696 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.455345 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.463164 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Aug 13 20:00:46 crc 
kubenswrapper[4183]: I0813 20:00:46.463969 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.466214 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.471661 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472147 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472334 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472521 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472656 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.467659 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.474041 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.507414 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.511328 4183 
reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576295 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576402 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576562 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576589 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576621 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: 
\"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576650 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576690 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576717 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.583259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.592742 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.592943 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.592999 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.636523 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-765b47f944-n2lhl"] Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.647947 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.648016 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.649387 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.683061 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.689520 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.733286 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.736500 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.750459 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.753396 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.761600 4183 
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.761600 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"]
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.790375 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.799700 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.820428 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.820881 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.891525 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.956583 4183 generic.go:334] "Generic (PLEG): container finished" podID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" containerID="2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039" exitCode=0
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.956890 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerDied","Data":"2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039"}
Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.957877 4183 scope.go:117] "RemoveContainer" containerID="2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039"
Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.161170 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"b57cce81-8ea0-4c4d-aae1-ee024d201c15","Type":"ContainerStarted","Data":"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab"}
Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.176297 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-765b47f944-n2lhl"]
Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.185972 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.304578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.400373 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ad7555-5f28-4555-a563-892713a8433a" path="/var/lib/kubelet/pods/13ad7555-5f28-4555-a563-892713a8433a/volumes"
Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.558469 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerStarted","Data":"417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3"}
Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.560090 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.837463 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"2f155735-a9be-4621-a5f2-5ab4b6957acd","Type":"ContainerStarted","Data":"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5"}
Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.067045 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log"
Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.067940 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"043a876882e6525ddc5f76decf1b6c920a7b88ce28a2364941d8f877fa66e241"}
Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.239693 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"
Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.501762 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5"}
Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.519739 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.519982 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.520026 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.607341 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerStarted","Data":"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc"}
Aug 13 20:00:49 crc kubenswrapper[4183]: I0813 20:00:49.547720 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:00:49 crc kubenswrapper[4183]: I0813 20:00:49.549557 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:00:51 crc kubenswrapper[4183]: I0813 20:00:51.371645 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:51 crc kubenswrapper[4183]: I0813 20:00:51.372722 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696048 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696731 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696861 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696908 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696966 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.881030 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.882103 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.881030 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.882186 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.884295 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.952035 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.954131 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.205724 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.978620 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-9-crc_227e3650-2a85-4229-8099-bb53972635b2/installer/0.log"
Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.981442 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerDied","Data":"1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99"}
Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.986820 4183 generic.go:334] "Generic (PLEG): container finished" podID="227e3650-2a85-4229-8099-bb53972635b2" containerID="1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99" exitCode=1
Aug 13 20:00:56 crc kubenswrapper[4183]: I0813 20:00:56.700337 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:00:56 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:00:56 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:00:56 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:00:56 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:00:56 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:00:56 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:00:56 crc kubenswrapper[4183]: I0813 20:00:56.700486 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:56 crc kubenswrapper[4183]: I0813 20:00:56.700620 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:00:57 crc kubenswrapper[4183]: I0813 20:00:57.632304 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:00:58 crc kubenswrapper[4183]: I0813 20:00:58.184180 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.540555 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.541338 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.540701 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"524f541503e673b38ef89e50d9e4dfc8448cecf293a683f236b94f15ea56379f"}
Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.623278 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"d21952f722a78650eafeaffd3eee446ec3e6f45e0e0dff0fde9b755169ca68a0"}
Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.986334 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"]
Aug 13 20:01:00 crc kubenswrapper[4183]: I0813 20:01:00.033563 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"]
Aug 13 20:01:00 crc kubenswrapper[4183]: W0813 20:01:00.559067 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23d6435_6431_4905_b41b_a517327385e5.slice/crio-411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58 WatchSource:0}: Error finding container 411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58: Status 404 returned error can't find the container with id 411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58
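[editor's note] The [+]/[-] block above is the verbose body of the oauth-apiserver's /readyz endpoint as relayed by the kubelet prober: one failing check ([-]etcd-readiness) turns the whole response into HTTP 500, hence "readyz check failed". A hedged Go sketch of an equivalent manual check; the target address is a placeholder (this log does not record the apiserver pod's IP), and InsecureSkipVerify stands in for trusting the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Placeholder address; substitute the pod IP and port under test.
	resp, err := client.Get("https://10.217.0.1:8443/readyz?verbose")
	if err != nil {
		fmt.Println("probe error:", err) // e.g. "connect: connection refused"
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 200 => every check prints [+]; 500 => body ends "readyz check failed".
	fmt.Printf("HTTP %d\n%s", resp.StatusCode, body)
}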
Aug 13 20:01:00 crc kubenswrapper[4183]: W0813 20:01:00.777733 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01feb2e0_a0f4_4573_8335_34e364e0ef40.slice/crio-ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404 WatchSource:0}: Error finding container ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404: Status 404 returned error can't find the container with id ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404
Aug 13 20:01:01 crc kubenswrapper[4183]: I0813 20:01:01.334242 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58"}
Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.077330 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-9-crc_227e3650-2a85-4229-8099-bb53972635b2/installer/0.log"
Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.079077 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.701589 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-9-crc_227e3650-2a85-4229-8099-bb53972635b2/installer/0.log"
Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.702169 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerDied","Data":"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef"}
Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.702390 4183 scope.go:117] "RemoveContainer" containerID="1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99"
Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.702657 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:01:03 crc kubenswrapper[4183]: I0813 20:01:03.198645 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404"}
Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.873700 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.874405 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.876409 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.876497 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.949495 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.949643 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Aug 13 20:01:05 crc kubenswrapper[4183]: I0813 20:01:05.275984 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:05 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:05 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:05 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:05 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:05 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:05 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:05 crc kubenswrapper[4183]: I0813 20:01:05.276114 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:05 crc kubenswrapper[4183]: I0813 20:01:05.481071 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.005457 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"227e3650-2a85-4229-8099-bb53972635b2\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") "
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.006124 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"227e3650-2a85-4229-8099-bb53972635b2\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") "
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.006301 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"227e3650-2a85-4229-8099-bb53972635b2\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") "
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.010689 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock" (OuterVolumeSpecName: "var-lock") pod "227e3650-2a85-4229-8099-bb53972635b2" (UID: "227e3650-2a85-4229-8099-bb53972635b2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.010732 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "227e3650-2a85-4229-8099-bb53972635b2" (UID: "227e3650-2a85-4229-8099-bb53972635b2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.032166 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "227e3650-2a85-4229-8099-bb53972635b2" (UID: "227e3650-2a85-4229-8099-bb53972635b2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.108676 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.108732 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.120371 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:07 crc kubenswrapper[4183]: I0813 20:01:07.572965 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podStartSLOduration=42.572913908 podStartE2EDuration="42.572913908s" podCreationTimestamp="2025-08-13 20:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:06.779492984 +0000 UTC m=+1033.472157982" watchObservedRunningTime="2025-08-13 20:01:07.572913908 +0000 UTC m=+1034.265578806"
[\"/kubepods.slice/kubepods-pod2f155735_a9be_4621_a5f2_5ab4b6957acd.slice/crio-conmon-e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5.scope\": RecentStats: unable to find data in memory cache]" Aug 13 20:01:10 crc kubenswrapper[4183]: I0813 20:01:10.967968 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerStarted","Data":"f3271fa1efff9a0885965f0ea8ca31234ba9caefd85007392c549bd273427721"} Aug 13 20:01:12 crc kubenswrapper[4183]: I0813 20:01:12.209177 4183 generic.go:334] "Generic (PLEG): container finished" podID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerID="e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5" exitCode=0 Aug 13 20:01:12 crc kubenswrapper[4183]: I0813 20:01:12.209422 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"2f155735-a9be-4621-a5f2-5ab4b6957acd","Type":"ContainerDied","Data":"e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5"} Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.357581 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-console/console-644bb77b49-5x5xk"] Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.357749 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" podNamespace="openshift-console" podName="console-644bb77b49-5x5xk" Aug 13 20:01:14 crc kubenswrapper[4183]: E0813 20:01:14.358204 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer" Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.358223 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer" Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.358394 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer" Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.359130 4183 util.go:30] "No sandbox for pod can be found. 
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.359130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485496 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485604 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485650 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485691 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485735 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485888 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485974 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.589709 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.591564 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.591746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.593750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.593991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.594177 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.594646 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.602313 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.603191 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.608153 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.609463 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.612142 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.612556 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872504 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872632 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872695 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.874520 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.874583 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5" gracePeriod=2
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872512 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.874887 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.876616 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.876700 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985882 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985943 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985989 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985997 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.667879 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:16 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.668083 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.668168 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.745284 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"391bd49947a0ae3e13b214a022dc7f8ebc8a0337699d428008fe902a18d050a6"}
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159036 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log"
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159333 4183 generic.go:334] "Generic (PLEG): container finished" podID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerID="47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239" exitCode=1
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159362 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerDied","Data":"47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239"}
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159818 4183 scope.go:117] "RemoveContainer" containerID="47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239"
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.614687 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.673898 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"2f155735-a9be-4621-a5f2-5ab4b6957acd\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") "
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.674125 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"2f155735-a9be-4621-a5f2-5ab4b6957acd\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") "
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.674669 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2f155735-a9be-4621-a5f2-5ab4b6957acd" (UID: "2f155735-a9be-4621-a5f2-5ab4b6957acd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.720762 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2f155735-a9be-4621-a5f2-5ab4b6957acd" (UID: "2f155735-a9be-4621-a5f2-5ab4b6957acd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.776045 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.776112 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.947235 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-644bb77b49-5x5xk"]
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.410224 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="ee7ad10446d56157471e17a6fd0a6c5ffb7cc6177a566dcf214a0b78b5502ef3" exitCode=0
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.410384 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"ee7ad10446d56157471e17a6fd0a6c5ffb7cc6177a566dcf214a0b78b5502ef3"}
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.613964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"2f155735-a9be-4621-a5f2-5ab4b6957acd","Type":"ContainerDied","Data":"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5"}
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.615688 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5"
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.615583 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:01:19 crc kubenswrapper[4183]: I0813 20:01:19.540752 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:19 crc kubenswrapper[4183]: I0813 20:01:19.541070 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.010289 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5" exitCode=0
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.010422 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5"}
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.010464 4183 scope.go:117] "RemoveContainer" containerID="50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.134694 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.312504 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.355962 4183 generic.go:334] "Generic (PLEG): container finished" podID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" containerID="20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687" exitCode=0
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.359304 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerDied","Data":"20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687"}
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.359386 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.360392 4183 scope.go:117] "RemoveContainer" containerID="20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.468060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:21 crc kubenswrapper[4183]: I0813 20:01:21.024540 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:01:21 crc kubenswrapper[4183]: I0813 20:01:21.602986 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"]
Aug 13 20:01:21 crc kubenswrapper[4183]: I0813 20:01:21.603405 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" containerID="cri-o://71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830" gracePeriod=30
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.206371 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"]
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.468707 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log"
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.471111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"de440c5d69c49e4ae9a6d8d6a8c21cebc200a69199b6854aa7edf579fd041ccd"}
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.472858 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.565665 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"]
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.565985 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" containerID="cri-o://417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3" gracePeriod=30
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.396139 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"]
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.473329 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.473426 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.625377 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-644bb77b49-5x5xk"]
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.053119 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.053229 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused"
Aug 13 20:01:24 crc kubenswrapper[4183]: W0813 20:01:24.084861 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e649ef6_bbda_4ad9_8a09_ac3803dd0cc1.slice/crio-48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107 WatchSource:0}: Error finding container 48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107: Status 404 returned error can't find the container with id 48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.294535 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24"}
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.295758 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.295918 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.297091 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.668324 4183 generic.go:334] "Generic (PLEG): container finished" podID="00d32440-4cce-4609-96f3-51ac94480aab" containerID="71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830" exitCode=0
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.668470 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerDied","Data":"71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830"}
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.871746 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.871878 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.872488 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.896333 4183 generic.go:334] "Generic (PLEG): container finished" podID="71af81a9-7d43-49b2-9287-c375900aa905" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e" exitCode=0
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.897921 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerDied","Data":"e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"}
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.898721 4183 scope.go:117] "RemoveContainer" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.909362 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 20:01:25 crc kubenswrapper[4183]: I0813 20:01:25.425912 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227e3650-2a85-4229-8099-bb53972635b2" path="/var/lib/kubelet/pods/227e3650-2a85-4229-8099-bb53972635b2/volumes"
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.201431 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107"}
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.469691 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerDied","Data":"417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3"}
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.470093 4183 generic.go:334] "Generic (PLEG): container finished" podID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerID="417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3" exitCode=0
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.805506 4183 generic.go:334] "Generic (PLEG): container finished" podID="b54e8941-2fc4-432a-9e51-39684df9089e" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540" exitCode=0
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.805810 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerDied","Data":"dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540"}
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.806954 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.807062 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.807600 4183 scope.go:117] "RemoveContainer" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540"
Aug 13 20:01:27 crc kubenswrapper[4183]: I0813 20:01:27.650207 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Aug 13 20:01:27 crc kubenswrapper[4183]: I0813 20:01:27.650662 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Aug 13 20:01:27 crc kubenswrapper[4183]: I0813 20:01:27.653706 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Aug 13 20:01:27 crc kubenswrapper[4183]: I0813 20:01:27.654104 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Aug 13 20:01:28 crc kubenswrapper[4183]: I0813 20:01:28.295104 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"2af5bb0c4b139d706151f3201c47d8cc989a3569891ca64ddff1c17afff77399"}
Aug 13 20:01:29 crc kubenswrapper[4183]: I0813 20:01:29.540695 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:29 crc kubenswrapper[4183]: I0813 20:01:29.541479 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.649538 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.650102 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.649680 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.650213 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.732117 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.732259 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.296466 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a"}
Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.307275 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:31 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:31 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:31 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:31 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:31 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:31 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.307529 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.307770 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.525000 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.590474 4183 generic.go:334] "Generic (PLEG): container finished" podID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220" exitCode=0
Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.591013 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerDied","Data":"de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220"}
Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.591986 4183 scope.go:117] "RemoveContainer" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220"
Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.798229 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/0.log"
Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.799503 4183 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b" exitCode=0
Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.799574 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b"}
Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.799630 4183 scope.go:117] "RemoveContainer" containerID="f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"
Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.800480 4183 scope.go:117] "RemoveContainer" containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b"
Aug 13 20:01:33 crc kubenswrapper[4183]: I0813 20:01:33.649066 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 20:01:33 crc kubenswrapper[4183]: I0813 20:01:33.649137 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873292 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873437 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873433 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873679 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.052072 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.052240 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.307817 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-7-crc" podStartSLOduration=58.307555991 podStartE2EDuration="58.307555991s" podCreationTimestamp="2025-08-13 20:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:35.303077173 +0000 UTC m=+1061.995741941" watchObservedRunningTime="2025-08-13 20:01:35.307555991 +0000 UTC m=+1062.000220839"
Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.309160 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-10-crc" podStartSLOduration=56.309123315 podStartE2EDuration="56.309123315s" podCreationTimestamp="2025-08-13 20:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:27.138539278 +0000 UTC m=+1053.831204276" watchObservedRunningTime="2025-08-13 20:01:35.309123315 +0000 UTC m=+1062.001788104"
Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.078709 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" containerID="cri-o://32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84" gracePeriod=28
Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.273056 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" containerID="cri-o://a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9" gracePeriod=15
Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.668612 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:36 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:36 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:36 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:36 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:36 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:36 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.668747 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.668916 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.890298 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log"
Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.890423 4183 generic.go:334] "Generic (PLEG): container finished" podID="0f394926-bdb9-425c-b36e-264d7fd34550" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d" exitCode=1
Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.890579 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerDied","Data":"30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d"}
event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerDied","Data":"30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d"} Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.891407 4183 scope.go:117] "RemoveContainer" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.895752 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fccc7b6-mkncc_b233d916-bfe3-4ae5-ae39-6b574d1aa05e/console/0.log" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.895915 4183 generic.go:334] "Generic (PLEG): container finished" podID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerID="a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9" exitCode=2 Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.895953 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerDied","Data":"a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9"} Aug 13 20:01:37 crc kubenswrapper[4183]: I0813 20:01:37.616220 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body= Aug 13 20:01:37 crc kubenswrapper[4183]: I0813 20:01:37.616433 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.540023 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.540131 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.995494 4183 generic.go:334] "Generic (PLEG): container finished" podID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerID="32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84" exitCode=0 Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.995692 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerDied","Data":"32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84"} Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.005343 4183 generic.go:334] "Generic (PLEG): container finished" podID="cc291782-27d2-4a74-af79-c7dcb31535d2" containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce" exitCode=0 Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.005439 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerDied","Data":"ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce"} Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.006541 4183 scope.go:117] "RemoveContainer" containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce" Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.729951 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.730089 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.098301 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d67253e-2acd-4bc1-8185-793587da4f17" containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc" exitCode=0 Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.098414 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerDied","Data":"de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc"} Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.099636 4183 scope.go:117] "RemoveContainer" containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.872298 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.872449 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.873231 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.873354 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.873415 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.875268 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.875340 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24" gracePeriod=2 Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.876252 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.876316 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.991710 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]log ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:01:44 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:01:44 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]shutdown ok Aug 13 20:01:44 crc kubenswrapper[4183]: readyz check failed Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.993555 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:01:45 crc kubenswrapper[4183]: I0813 20:01:45.053241 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:01:45 crc 
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.001768 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:47 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.002276 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.615729 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body=
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.616442 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused"
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.245860 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [-]etcd failed: reason withheld
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-api-request-count-filter ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startkubeinformers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-consumer ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-filter ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-informers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-controllers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/crd-informer-synced ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-service-ip-repair-controllers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/rbac/bootstrap-roles ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-producer ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-system-namespaces-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/bootstrap-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-cluster-authentication-info-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-legacy-token-tracking-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-aggregator-informers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-registration-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-status-available-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-wait-for-first-sync ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/kube-apiserver-autoregistration ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]autoregister-completion ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapi-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapiv3-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-discovery-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: livez check failed
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.246065 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.540146 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.540335 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.580248 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.580359 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.729450 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.729579 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.008964 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]api-openshift-apiserver-available ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]api-openshift-oauth-apiserver-available ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-api-request-count-filter ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startkubeinformers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-consumer ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-filter ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-informers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-controllers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/crd-informer-synced ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-service-ip-repair-controllers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/rbac/bootstrap-roles ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-producer ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-system-namespaces-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/bootstrap-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-cluster-authentication-info-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-legacy-token-tracking-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-aggregator-informers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-registration-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-status-available-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-wait-for-first-sync ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/kube-apiserver-autoregistration ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]autoregister-completion ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapi-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapiv3-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-discovery-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:52 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.011833 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.012278 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.362490 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.486931 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.487071 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" exitCode=1
containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" exitCode=1 Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.487115 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.489136 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:01:53 crc kubenswrapper[4183]: I0813 20:01:53.149519 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:01:53 crc kubenswrapper[4183]: I0813 20:01:53.513200 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24" exitCode=0 Aug 13 20:01:53 crc kubenswrapper[4183]: I0813 20:01:53.513465 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24"} Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.654140 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.654271 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.662178 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:01:54 crc kubenswrapper[4183]: [+]log ok Aug 13 20:01:54 crc kubenswrapper[4183]: [-]etcd failed: reason withheld Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:01:54 crc kubenswrapper[4183]: healthz check failed Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.662334 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:01:54 crc 
kubenswrapper[4183]: I0813 20:01:54.697503 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697616 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697708 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697940 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697999 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.872519 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.872695 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:55 crc kubenswrapper[4183]: I0813 20:01:55.052469 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:01:55 crc kubenswrapper[4183]: I0813 20:01:55.052615 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:01:56 crc kubenswrapper[4183]: I0813 20:01:56.187358 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:01:56 crc kubenswrapper[4183]: [+]log ok Aug 13 20:01:56 crc kubenswrapper[4183]: [-]etcd failed: reason withheld Aug 13 20:01:56 crc kubenswrapper[4183]: [+]etcd-readiness ok Aug 13 20:01:56 crc kubenswrapper[4183]: [+]api-openshift-apiserver-available ok Aug 13 20:01:56 crc kubenswrapper[4183]: [+]api-openshift-oauth-apiserver-available ok Aug 13 20:01:56 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-api-request-count-filter ok Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startkubeinformers ok Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Aug 13 20:01:56 crc kubenswrapper[4183]: 
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-consumer ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-filter ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-informers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-controllers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/crd-informer-synced ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-service-ip-repair-controllers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/rbac/bootstrap-roles ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-producer ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-system-namespaces-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/bootstrap-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-cluster-authentication-info-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-legacy-token-tracking-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-aggregator-informers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-registration-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-status-available-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-wait-for-first-sync ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/kube-apiserver-autoregistration ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]autoregister-completion ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapi-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapiv3-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-discovery-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:56 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:56 crc kubenswrapper[4183]: I0813 20:01:56.188201 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.615874 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body=
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.616124 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused"
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.616274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.705528 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:01:58 crc kubenswrapper[4183]: I0813 20:01:58.104674 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:58 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:58 crc kubenswrapper[4183]: I0813 20:01:58.104897 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:58 crc kubenswrapper[4183]: I0813 20:01:58.249211 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podStartSLOduration=81.249140383 podStartE2EDuration="1m21.249140383s" podCreationTimestamp="2025-08-13 20:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:58.246314082 +0000 UTC m=+1084.938978760" watchObservedRunningTime="2025-08-13 20:01:58.249140383 +0000 UTC m=+1084.941805101"
Aug 13 20:01:59 crc kubenswrapper[4183]: I0813 20:01:59.540096 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:59 crc kubenswrapper[4183]: I0813 20:01:59.540175 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
\"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:02:00 crc kubenswrapper[4183]: I0813 20:02:00.577590 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:02:00 crc kubenswrapper[4183]: I0813 20:02:00.729112 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:02:00 crc kubenswrapper[4183]: I0813 20:02:00.729322 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:02:01 crc kubenswrapper[4183]: I0813 20:02:01.333608 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:02:01 crc kubenswrapper[4183]: I0813 20:02:01.334488 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.281117 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:02:03 crc kubenswrapper[4183]: [+]log ok Aug 13 20:02:03 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:02:03 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:02:03 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:02:03 crc kubenswrapper[4183]: [+]shutdown ok Aug 13 20:02:03 crc kubenswrapper[4183]: readyz check failed Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.281331 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.281457 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.477433 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:02:04 crc kubenswrapper[4183]: I0813 20:02:04.871283 4183 
patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:02:04 crc kubenswrapper[4183]: I0813 20:02:04.871391 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:02:05 crc kubenswrapper[4183]: I0813 20:02:05.052147 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:02:05 crc kubenswrapper[4183]: I0813 20:02:05.052528 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:02:07 crc kubenswrapper[4183]: I0813 20:02:07.615652 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body= Aug 13 20:02:07 crc kubenswrapper[4183]: I0813 20:02:07.617086 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" Aug 13 20:02:09 crc kubenswrapper[4183]: I0813 20:02:09.539284 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:09 crc kubenswrapper[4183]: I0813 20:02:09.539527 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:02:10 crc kubenswrapper[4183]: I0813 20:02:10.729873 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:02:10 crc kubenswrapper[4183]: I0813 20:02:10.729972 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" 
podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:02:13 crc kubenswrapper[4183]: I0813 20:02:13.884598 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-mtx25_23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/openshift-apiserver/0.log" Aug 13 20:02:13 crc kubenswrapper[4183]: I0813 20:02:13.891375 4183 generic.go:334] "Generic (PLEG): container finished" podID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerID="a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143" exitCode=137 Aug 13 20:02:14 crc kubenswrapper[4183]: I0813 20:02:14.871947 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:02:14 crc kubenswrapper[4183]: I0813 20:02:14.872055 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.044158 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:02:15 crc kubenswrapper[4183]: [+]log ok Aug 13 20:02:15 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:02:15 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:02:15 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:02:15 crc kubenswrapper[4183]: [+]shutdown ok Aug 13 20:02:15 crc kubenswrapper[4183]: readyz check failed Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.044241 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.044717 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.053155 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.053264 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.105045 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.908592 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/0.log"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.908860 4183 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2" exitCode=1
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.908964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2"}
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.910700 4183 scope.go:117] "RemoveContainer" containerID="957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2"
Aug 13 20:02:17 crc kubenswrapper[4183]: I0813 20:02:17.616356 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body=
Aug 13 20:02:17 crc kubenswrapper[4183]: I0813 20:02:17.616544 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused"
Aug 13 20:02:19 crc kubenswrapper[4183]: I0813 20:02:19.539668 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:02:19 crc kubenswrapper[4183]: I0813 20:02:19.540042 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:02:20 crc kubenswrapper[4183]: I0813 20:02:20.730015 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:02:20 crc kubenswrapper[4183]: I0813 20:02:20.730523 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.122979 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123459 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" containerID="cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" gracePeriod=15
Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123664 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" gracePeriod=15
Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123708 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" gracePeriod=15
Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123747 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" gracePeriod=15
Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123873 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2" gracePeriod=15
Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127333 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127486 4183 topology_manager.go:215] "Topology Admit Handler" podUID="48128e8d38b5cbcd2691da698bd9cac3" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc"
Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127694 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerName="pruner"
Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127710 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerName="pruner"
Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127721 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="setup"
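gracePeriod=15 above is the pod's termination grace period: the runtime delivers SIGTERM first and escalates to SIGKILL only if the container is still running when the period expires. A generic sketch of that pattern for a local process, assuming a unix system; this is not kubelet or CRI-O code:

// Illustrative only: the SIGTERM-then-SIGKILL pattern behind
// "Killing container with a grace period", sketched for a local process.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite stop, like gracePeriod=15
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // hard stop; the child is reported as 128+9=137
		<-done
		fmt.Println("killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	_ = cmd.Start()
	stopWithGrace(cmd, 15*time.Second)
}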
state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="setup" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127742 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127750 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127763 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127770 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127864 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127876 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127900 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127912 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127925 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127932 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127943 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127952 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127962 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127970 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127979 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127987 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127996 4183 
cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128003 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128154 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128165 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128178 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128187 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128197 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerName="pruner" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128208 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128220 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128228 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128235 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128246 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.128466 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128480 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.128492 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128500 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128688 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128704 4183 
memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.133575 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.133659 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bf055e84f32193b9c1c21b0c34a61f01" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.134289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.158390 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.158498 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.158611 4183 topology_manager.go:215] "Topology Admit Handler" podUID="92b2a8634cfe8a21cffcc98cc8c87160" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159084 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159105 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159116 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159124 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159135 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="wait-for-host-port" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159142 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="wait-for-host-port" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159158 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159170 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159295 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159313 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159323 4183 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.160382 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" containerID="cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" gracePeriod=30 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.160501 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" containerID="cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e" gracePeriod=30 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.160637 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" containerID="cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" gracePeriod=30 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304205 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304341 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304373 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304395 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304438 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304469 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304508 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304547 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304579 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304617 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406246 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406269 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406348 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406385 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406426 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407344 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407523 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407600 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407640 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407669 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407700 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407732 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407761 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.976484 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.979513 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" exitCode=2 Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.004564 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.007470 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.011128 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" 
containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" exitCode=0 Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.011262 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" exitCode=0 Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.011369 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2" exitCode=2 Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.023749 4183 generic.go:334] "Generic (PLEG): container finished" podID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" containerID="7be671fc50422e885dbb1fec6a6c30037cba5481e39185347522a94f177d9763" exitCode=0 Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.023924 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerDied","Data":"7be671fc50422e885dbb1fec6a6c30037cba5481e39185347522a94f177d9763"} Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.029132 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.031121 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e" exitCode=0 Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.036474 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.039619 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.040716 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" exitCode=0 Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.050510 4183 generic.go:334] "Generic (PLEG): container finished" podID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerID="c790588ca0e77460d01591ce4be738641e9b039fdf1cb3c3fdd37a9243b55f83" exitCode=0 Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.050563 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"b57cce81-8ea0-4c4d-aae1-ee024d201c15","Type":"ContainerDied","Data":"c790588ca0e77460d01591ce4be738641e9b039fdf1cb3c3fdd37a9243b55f83"} Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.058308 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.064708 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" exitCode=0 Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.503045 4183 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.507980 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.871920 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.872057 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:02:25 crc kubenswrapper[4183]: I0813 20:02:25.054230 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:02:25 crc kubenswrapper[4183]: I0813 20:02:25.054353 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:02:27 crc kubenswrapper[4183]: I0813 20:02:27.616315 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body= Aug 13 20:02:27 crc kubenswrapper[4183]: I0813 20:02:27.616946 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" Aug 13 20:02:29 crc kubenswrapper[4183]: I0813 20:02:29.539666 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:29 crc kubenswrapper[4183]: I0813 20:02:29.539760 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:02:30 crc kubenswrapper[4183]: I0813 20:02:30.729509 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout" start-of-body= Aug 13 20:02:30 crc kubenswrapper[4183]: I0813 20:02:30.730239 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.144042 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.429055 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.431535 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" exitCode=0 Aug 13 20:02:31 crc kubenswrapper[4183]: E0813 20:02:31.866061 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.891324 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.894905 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.897310 4183 status_manager.go:853] 
"Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.898116 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.898976 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.902627 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.912313 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.919507 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.923328 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.925575 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.927066 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.937900 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.939973 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.942267 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.945280 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.949082 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.953861 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.954953 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.956319 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.959661 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.960501 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.962225 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.963159 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.964075 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.967216 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.969357 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.974407 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.976307 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.978201 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.979062 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.981029 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.983325 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.984602 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.985322 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.986095 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.986957 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.988177 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.271926 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.272938 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.274592 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.275658 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.276688 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: I0813 20:02:32.276739 4183 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.277635 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="200ms" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.480426 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="400ms" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.886290 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="800ms" Aug 13 20:02:33 crc 
kubenswrapper[4183]: E0813 20:02:33.131135 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.474262 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.474347 4183 generic.go:334] "Generic (PLEG): container finished" podID="79050916-d488-4806-b556-1b0078b31e53" containerID="f3271fa1efff9a0885965f0ea8ca31234ba9caefd85007392c549bd273427721" exitCode=1 Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.474548 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerDied","Data":"f3271fa1efff9a0885965f0ea8ca31234ba9caefd85007392c549bd273427721"} Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.476760 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.478490 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.479453 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 
20:02:33.480291 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.481111 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.483928 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.485227 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.485599 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/0.log" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.486055 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.487543 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.488271 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a" exitCode=255 Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.488325 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a"} Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.488552 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.491152 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.491753 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.492511 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.493378 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.630867 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.635107 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.637395 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.640083 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.640704 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.642214 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.643737 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.644623 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.645209 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.649266 
4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.649862 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.650680 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.651510 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.653001 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.654423 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.656190 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.658048 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.659026 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.659894 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.660903 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.661440 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.663152 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.665048 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.665610 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.666446 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.667012 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.667883 4183 status_manager.go:853] 
"Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.669062 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.669996 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.670695 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.672064 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.673439 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.675534 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: E0813 20:02:33.693056 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="1.6s" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.776134 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.779418 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.780020 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.780612 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.781261 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.782027 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.782951 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.784489 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.785098 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: 
I0813 20:02:33.785578 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.786645 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.787280 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.787737 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.788288 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.788949 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.789443 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.790632 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.814858 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817226 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817278 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817336 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817359 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817431 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817456 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817484 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817511 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817533 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.823308 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: 
"00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.824086 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.824283 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca" (OuterVolumeSpecName: "client-ca") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.829595 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.831321 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.831916 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.832096 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config" (OuterVolumeSpecName: "config") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.839907 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.842529 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849603 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5" (OuterVolumeSpecName: "kube-api-access-hqzj5") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "kube-api-access-hqzj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849899 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849943 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849964 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.850010 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.853018 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.855311 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config" (OuterVolumeSpecName: "config") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.857175 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.857435 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq" (OuterVolumeSpecName: "kube-api-access-5hdnq") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "kube-api-access-5hdnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.854495 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.858698 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.859277 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.860308 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.861870 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.867239 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.869475 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.877766 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.878742 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.880544 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.881319 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.952876 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.952928 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953030 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953060 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953117 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953202 4183 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953432 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953448 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953461 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953475 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953486 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953865 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2ad657a4-8b02-4373-8d0d-b0e25345dc90" (UID: "2ad657a4-8b02-4373-8d0d-b0e25345dc90"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953916 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock" (OuterVolumeSpecName: "var-lock") pod "b57cce81-8ea0-4c4d-aae1-ee024d201c15" (UID: "b57cce81-8ea0-4c4d-aae1-ee024d201c15"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.954018 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock" (OuterVolumeSpecName: "var-lock") pod "2ad657a4-8b02-4373-8d0d-b0e25345dc90" (UID: "2ad657a4-8b02-4373-8d0d-b0e25345dc90"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.954018 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b57cce81-8ea0-4c4d-aae1-ee024d201c15" (UID: "b57cce81-8ea0-4c4d-aae1-ee024d201c15"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.962464 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b57cce81-8ea0-4c4d-aae1-ee024d201c15" (UID: "b57cce81-8ea0-4c4d-aae1-ee024d201c15"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.965156 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2ad657a4-8b02-4373-8d0d-b0e25345dc90" (UID: "2ad657a4-8b02-4373-8d0d-b0e25345dc90"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054315 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054364 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054379 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054393 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054406 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054418 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.496521 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerDied","Data":"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.496557 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.496587 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.497818 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.498384 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.499163 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.500878 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.501436 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.502971 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.504043 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.504689 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" 
event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerDied","Data":"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.504911 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.510569 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.511628 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.512494 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.513769 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515004 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515683 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515687 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"b57cce81-8ea0-4c4d-aae1-ee024d201c15","Type":"ContainerDied","Data":"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515875 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.517041 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.518184 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.519256 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.520329 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.521510 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.522740 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.522921 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.523083 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerDied","Data":"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.523679 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.524237 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.525267 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.533218 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.535188 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.537986 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.538638 4183 
status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.539522 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.540650 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.541552 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.542377 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.543332 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.546395 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.547282 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.548264 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.549312 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.550070 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.550576 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.551271 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.553470 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.554170 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.555246 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.556157 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.556904 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.557767 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.564338 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.567869 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.568709 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.569440 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.570700 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.571439 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.572174 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.573967 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.576134 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.577151 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.577686 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.578274 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.578869 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.579466 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.580407 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.583300 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.584394 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.585512 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.587040 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.587641 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.588412 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.871918 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.872067 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" 
podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.956115 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.956951 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.957575 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.958710 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.959960 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.960004 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.217194 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.218923 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.219565 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.221954 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.223049 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.224121 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.224713 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.225338 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.226106 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.227234 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.228098 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.229299 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.230995 4183 status_manager.go:853] "Failed to 
get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.231916 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.232540 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.233328 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:35 crc kubenswrapper[4183]: E0813 20:02:35.295244 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="3.2s" Aug 13 20:02:38 crc kubenswrapper[4183]: E0813 20:02:38.497532 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="6.4s" Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.539274 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.539381 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.971048 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.976426 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.980409 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.983726 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.986091 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.993431 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.996708 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.996719 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.999005 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.005357 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.009100 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-mtx25_23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/openshift-apiserver/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.009423 
4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.012959 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.013871 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.014300 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.015256 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.016421 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.017243 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.017766 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.020040 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.021231 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.023635 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fccc7b6-mkncc_b233d916-bfe3-4ae5-ae39-6b574d1aa05e/console/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.023942 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.024124 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.025519 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.029754 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.031242 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.032249 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.033030 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.034299 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.034354 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.034382 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.035124 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.036126 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.036459 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.037488 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.038454 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.039382 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.040466 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.041496 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.042642 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": 
dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.043611 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.044625 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.045613 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.047488 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.049417 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.050515 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.051643 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.053272 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.057935 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.061766 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.062904 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.063535 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.064534 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.066270 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.067941 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.068702 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.070618 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.071518 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.073352 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.075716 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.077205 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.079158 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.084023 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.086202 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.088068 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.089629 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090453 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"631cdb37fbb54e809ecc5e719aebd371\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090596 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"631cdb37fbb54e809ecc5e719aebd371\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090899 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "631cdb37fbb54e809ecc5e719aebd371" (UID: "631cdb37fbb54e809ecc5e719aebd371"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090942 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "631cdb37fbb54e809ecc5e719aebd371" (UID: "631cdb37fbb54e809ecc5e719aebd371"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.092038 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.093259 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.093311 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.093608 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.193911 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.193988 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194026 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194058 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194094 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194121 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"79050916-d488-4806-b556-1b0078b31e53\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " Aug 13 
20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194161 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194187 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194206 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194228 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194249 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"53c1db1508241fbac1bedf9130341ffe\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194277 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194297 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod \"79050916-d488-4806-b556-1b0078b31e53\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194324 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194346 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194382 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194409 
4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194436 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"53c1db1508241fbac1bedf9130341ffe\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194711 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194747 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194821 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194884 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194926 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194946 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"53c1db1508241fbac1bedf9130341ffe\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194967 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194991 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195019 4183 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195045 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195075 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195096 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195119 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195148 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"79050916-d488-4806-b556-1b0078b31e53\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195296 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock" (OuterVolumeSpecName: "var-lock") pod "79050916-d488-4806-b556-1b0078b31e53" (UID: "79050916-d488-4806-b556-1b0078b31e53"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195599 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195961 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config" (OuterVolumeSpecName: "config") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.196677 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.196746 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.197177 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.197289 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit" (OuterVolumeSpecName: "audit") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.197696 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "53c1db1508241fbac1bedf9130341ffe" (UID: "53c1db1508241fbac1bedf9130341ffe"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.198116 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "53c1db1508241fbac1bedf9130341ffe" (UID: "53c1db1508241fbac1bedf9130341ffe"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.198903 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199238 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199301 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "53c1db1508241fbac1bedf9130341ffe" (UID: "53c1db1508241fbac1bedf9130341ffe"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199638 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199721 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca" (OuterVolumeSpecName: "service-ca") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.200026 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "79050916-d488-4806-b556-1b0078b31e53" (UID: "79050916-d488-4806-b556-1b0078b31e53"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.202489 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.204030 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config" (OuterVolumeSpecName: "console-config") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.208569 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9" (OuterVolumeSpecName: "kube-api-access-r8qj9") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "kube-api-access-r8qj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.218292 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.220721 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.221921 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.227524 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.227679 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.228713 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss" (OuterVolumeSpecName: "kube-api-access-4f9ss") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "kube-api-access-4f9ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.229019 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.229133 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.231737 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "79050916-d488-4806-b556-1b0078b31e53" (UID: "79050916-d488-4806-b556-1b0078b31e53"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.236227 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.237452 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (OuterVolumeSpecName: "registry-storage") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97". PluginName "kubernetes.io/csi", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.238634 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.239584 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.241981 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh" (OuterVolumeSpecName: "kube-api-access-lz9qh") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "kube-api-access-lz9qh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297045 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297115 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297139 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297153 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297170 4183 reconciler_common.go:300] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297185 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297199 4183 reconciler_common.go:300] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297212 4183 reconciler_common.go:300] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297227 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297240 4183 reconciler_common.go:300] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297254 4183 reconciler_common.go:300] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297271 4183 reconciler_common.go:300] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297288 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 
20:02:40.297304 4183 reconciler_common.go:300] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297318 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297333 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297347 4183 reconciler_common.go:300] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297364 4183 reconciler_common.go:300] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297398 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297413 4183 reconciler_common.go:300] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297429 4183 reconciler_common.go:300] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297444 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297458 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297472 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297485 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297501 4183 reconciler_common.go:300] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297515 4183 
reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297529 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297542 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297559 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297573 4183 reconciler_common.go:300] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.588367 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fccc7b6-mkncc_b233d916-bfe3-4ae5-ae39-6b574d1aa05e/console/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.588554 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.588685 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerDied","Data":"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f"} Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.591107 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.592722 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.593893 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.596348 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.598081 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.598716 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.599917 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.602294 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.604509 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.605512 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.608720 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.613287 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.614356 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.615596 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.616744 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.617542 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.618533 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.624663 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.626103 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.628269 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.629763 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.630956 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.632720 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.633709 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.634588 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.643673 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.644669 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.647267 4183 status_manager.go:853] "Failed to 
get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.649110 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.650116 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.650878 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.652045 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerDied","Data":"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4"} Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.655957 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.656491 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.656635 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerDied","Data":"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc"} Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.656685 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.658394 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.661451 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.662678 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-mtx25_23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/openshift-apiserver/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.662727 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.663485 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.664381 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.665472 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.666156 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.667619 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.677670 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.679546 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.681101 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.683452 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.684923 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.686295 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.687519 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.688643 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.690579 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.692375 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.695015 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.710178 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.715430 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.717448 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.720003 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.721741 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.722877 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.723600 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.724325 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.725055 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.725735 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.728397 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.731248 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.738267 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": 
dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.740283 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.742713 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.743524 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.747326 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.748566 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.749716 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.754477 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.755827 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.756452 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" 
pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.757134 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.757716 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.758331 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.759046 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.759607 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.760155 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.760650 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.761316 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.761945 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.762517 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.763554 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.764555 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.765964 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.767552 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.770117 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:41 crc kubenswrapper[4183]: I0813 20:02:41.220590 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" path="/var/lib/kubelet/pods/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/volumes" Aug 13 20:02:41 crc kubenswrapper[4183]: I0813 20:02:41.223978 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c1db1508241fbac1bedf9130341ffe" path="/var/lib/kubelet/pods/53c1db1508241fbac1bedf9130341ffe/volumes" Aug 13 20:02:41 crc kubenswrapper[4183]: I0813 
20:02:41.228241 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="631cdb37fbb54e809ecc5e719aebd371" path="/var/lib/kubelet/pods/631cdb37fbb54e809ecc5e719aebd371/volumes" Aug 13 20:02:42 crc kubenswrapper[4183]: I0813 20:02:42.615716 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:02:42 crc kubenswrapper[4183]: I0813 20:02:42.615907 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:02:43 crc kubenswrapper[4183]: E0813 20:02:43.133995 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:02:44 crc kubenswrapper[4183]: I0813 20:02:44.871378 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:02:44 crc kubenswrapper[4183]: I0813 20:02:44.872024 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:02:44 crc kubenswrapper[4183]: E0813 20:02:44.899307 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.134320 4183 kubelet_node_status.go:594] "Error updating node status, 
will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.136079 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.137078 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.138687 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.140025 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.140097 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.213624 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.215267 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.218619 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.221977 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.222751 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.223611 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.224466 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.225551 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.226547 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.227405 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.229145 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.229898 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.230641 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.231662 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.232379 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.233232 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.234537 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.208317 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.210866 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.211828 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.212948 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.213960 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.214838 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.216124 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.217011 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.218117 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.219027 4183 
status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.220223 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.221319 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.222379 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.223687 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.225764 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.226823 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.227763 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.228582 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.229549 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.229580 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:02:46 crc kubenswrapper[4183]: E0813 20:02:46.230413 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.231018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.208426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.212466 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.213743 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.215143 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.216187 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.216927 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.218184 4183 status_manager.go:853] "Failed to get 
status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.219320 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.220300 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.221351 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.223186 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.223737 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.224717 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.227581 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.228651 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.229338 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.229363 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.230133 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: E0813 20:02:49.230266 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.230940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.231155 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.232316 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.539512 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.539728 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:02:51 crc kubenswrapper[4183]: E0813 20:02:51.901264 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:02:53 crc kubenswrapper[4183]: E0813 20:02:53.137504 4183 event.go:355] "Unable 
to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707203 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Pending" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707366 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Pending" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707420 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707468 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707503 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Pending" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707532 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.872090 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.872231 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.219044 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 
20:02:55.220296 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.222133 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.223240 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.224009 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.224820 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.226944 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.228494 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.230011 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.231203 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.231769 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.232434 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.233162 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.234290 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.239215 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.240931 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.242716 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.244399 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.245681 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.336066 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.337683 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.340507 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.341480 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.342210 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.342229 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:02:58 crc kubenswrapper[4183]: E0813 20:02:58.904133 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:02:59 crc kubenswrapper[4183]: I0813 20:02:59.541340 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:59 crc kubenswrapper[4183]: I0813 20:02:59.541485 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:03:03 crc kubenswrapper[4183]: E0813 20:03:03.139563 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:04 crc kubenswrapper[4183]: I0813 20:03:04.871666 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:04 crc kubenswrapper[4183]: I0813 20:03:04.871934 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.210563 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.211517 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.212300 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.213267 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.214501 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.215662 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.217155 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.218226 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.219282 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.220280 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.221003 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.221764 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.222425 4183 status_manager.go:853] "Failed to get status 
for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.223649 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.224408 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.225165 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.226077 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.226826 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.227494 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.444295 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.445355 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 
20:03:05.446196 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.447314 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.448427 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.448472 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.908710 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:09 crc kubenswrapper[4183]: I0813 20:03:09.540596 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:03:09 crc kubenswrapper[4183]: I0813 20:03:09.540878 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.947144 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.948536 4183 generic.go:334] "Generic (PLEG): container finished" podID="51a02bbf-2d40-4f84-868a-d399ea18a846" containerID="91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f" exitCode=1 Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.948600 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerDied","Data":"91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f"} Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.950159 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.950921 4183 scope.go:117] "RemoveContainer" containerID="91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.951127 4183 
status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.952515 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.953682 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.954986 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.956447 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.957937 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.959092 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.961099 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.962411 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" 
pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.962999 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.963527 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.964159 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.965230 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.966427 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.967529 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.970578 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.971704 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.972739 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.973474 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:12 crc kubenswrapper[4183]: E0813 20:03:12.913055 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:13 crc kubenswrapper[4183]: E0813 20:03:13.142309 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:14 crc kubenswrapper[4183]: I0813 20:03:14.873139 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:14 crc kubenswrapper[4183]: I0813 20:03:14.873303 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.214539 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.215659 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.217023 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.218560 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.219446 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.220423 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.221418 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.222705 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.223572 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 
20:03:15.224623 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.225457 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.226282 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.227309 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.227988 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.228621 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.230261 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.235597 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.236756 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" 
pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.238064 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.239153 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.649213 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.650252 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.651715 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.652691 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.653510 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.653526 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:03:19 crc kubenswrapper[4183]: I0813 20:03:19.540153 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:03:19 crc kubenswrapper[4183]: I0813 20:03:19.540272 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 
20:03:19 crc kubenswrapper[4183]: E0813 20:03:19.915210 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s"
Aug 13 20:03:22 crc kubenswrapper[4183]: E0813 20:03:22.278613 4183 desired_state_of_world_populator.go:320] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" volumeName="registry-storage"
Aug 13 20:03:23 crc kubenswrapper[4183]: E0813 20:03:23.144835 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 20:03:24 crc kubenswrapper[4183]: E0813 20:03:24.609959 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107"
Aug 13 20:03:24 crc kubenswrapper[4183]: E0813 20:03:24.610356 4183 kuberuntime_manager.go:1262] container &Container{Name:console,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae,Command:[/opt/bridge/bin/bridge --public-dir=/opt/bridge/static --config=/var/console-config/console-config.yaml --service-ca-file=/var/service-ca/service-ca.crt --v=2],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{104857600 0} {} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:console-serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-oauth-config,ReadOnly:true,MountPath:/var/oauth-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-config,ReadOnly:true,MountPath:/var/console-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:service-ca,ReadOnly:true,MountPath:/var/service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:oauth-serving-cert,ReadOnly:true,MountPath:/var/oauth-serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2nz92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:1,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[sleep 25],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000590000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:30,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod console-644bb77b49-5x5xk_openshift-console(9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1): CreateContainerError: context deadline exceeded
Aug 13 20:03:24 crc kubenswrapper[4183]: E0813 20:03:24.610451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Aug 13 20:03:24 crc kubenswrapper[4183]: I0813 20:03:24.872084 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:03:24 crc kubenswrapper[4183]: I0813 20:03:24.872210 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.047833 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.049084 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.050205 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.051015 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.051935 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.052827 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.053835 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.054432 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.055227 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.055950 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.056836 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.057551 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.058188 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.058752 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.059343 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.059963 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.060567 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.061288 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.061997 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.062546 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.063426 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.212956 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.214088 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.215231 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.216167 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.217076 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.218506 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.219432 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.220191 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.221977 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.226475 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.227704 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.229071 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.229894 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.230754 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.231917 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.232972 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.233637 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.234455 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.235441 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.236316 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.237150 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: E0813 20:03:25.422534 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61"
Aug 13 20:03:25 crc kubenswrapper[4183]: E0813 20:03:25.422867 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,Command:[cluster-kube-scheduler-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_openshift-kube-scheduler-operator(71af81a9-7d43-49b2-9287-c375900aa905): CreateContainerError: context deadline exceeded
Aug 13 20:03:25 crc kubenswrapper[4183]: E0813 20:03:25.422934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.008298 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.009152 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.009639 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.010249 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.010877 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.010914 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.052150 4183 scope.go:117] "RemoveContainer" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.053483 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.055448 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.056550 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.057467 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.058261 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.059259 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.060223 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.061058 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.061933 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.062691 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.063579 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.064438 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.065181 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.065991 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.066908 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.067756 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.068570 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.069641 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.071225 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.072344 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.073650 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.074939 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.917366 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s"
Aug 13 20:03:27 crc kubenswrapper[4183]: E0813 20:03:27.231826 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239"
Aug 13 20:03:27 crc kubenswrapper[4183]: E0813 20:03:27.232062 4183 kuberuntime_manager.go:1262] container &Container{Name:cluster-image-registry-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,Command:[],Args:[--files=/var/run/configmaps/trusted-ca/tls-ca-bundle.pem --files=/etc/secrets/tls.crt --files=/etc/secrets/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cluster-image-registry-operator,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8,ValueFrom:nil,},EnvVar{Name:IMAGE_PRUNER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:AZURE_ENVIRONMENT_FILEPATH,Value:/tmp/azurestackcloud.json,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:trusted-ca,ReadOnly:false,MountPath:/var/run/configmaps/trusted-ca/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:image-registry-operator-tls,ReadOnly:false,MountPath:/etc/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9x6dp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000290000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-image-registry-operator-7769bd8d7d-q5cvv_openshift-image-registry(b54e8941-2fc4-432a-9e51-39684df9089e): CreateContainerError: context deadline exceeded
Aug 13 20:03:27 crc kubenswrapper[4183]: E0813 20:03:27.232162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-image-registry-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.067346 4183 scope.go:117] "RemoveContainer" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.067614 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.068524 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.069916 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.070591 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.071345 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.072227 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.073426 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.074561 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.075600 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.076508 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.077389 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.078278 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.078943 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.079522 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.080234 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.080923 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.081510 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.082587 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.085724 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.088098 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.089261 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.089892 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:29 crc kubenswrapper[4183]: I0813 20:03:29.540064 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:03:29 crc kubenswrapper[4183]: I0813 20:03:29.540268 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:03:31 crc kubenswrapper[4183]: E0813 20:03:31.361546 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58"
Aug 13 20:03:31 crc kubenswrapper[4183]: E0813 20:03:31.362141 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-apiserver-check-endpoints,Image:quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69,Command:[cluster-kube-apiserver-operator check-endpoints],Args:[--listen 0.0.0.0:17698 --namespace $(POD_NAMESPACE) --v 2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:check-endpoints,HostPort:0,ContainerPort:17698,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6j2kj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5): CreateContainerError: context deadline exceeded
Aug 13 20:03:31 crc kubenswrapper[4183]: E0813 20:03:31.362199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.110567 4183 scope.go:117] "RemoveContainer" containerID="98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.112827 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.114285 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.115013 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.115521 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.116366 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.117287 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.118542 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.119645 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.120606 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.121994 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.123110 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.125717 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.126669 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.127456 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.128200 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.128897 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.131474 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.132164 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.132706 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.134032 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.134677 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.135378 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.136175 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.804096 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.804196 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"} err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.804225 4183 scope.go:117] "RemoveContainer" containerID="c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5"
Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.955395 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d"
Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.955915 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-controller-manager-operator,Image:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,Command:[cluster-kube-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d6201c776053346ebce8f90c34797a7a7c05898008e17f3ba9673f5f14507b0,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-6f6cb54958-rbddb_openshift-kube-controller-manager-operator(c1620f19-8aa3-45cf-931b-7ae0e5cd14cf): CreateContainerError: context deadline exceeded
Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.956046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.957927 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71"
Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.958531 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-config-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,Command:[cluster-config-operator operator --operator-version=$(OPERATOR_IMAGE_VERSION) --authoritative-feature-gate-dir=/available-featuregates],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8dcvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start
failed in pod openshift-config-operator-77658b5b66-dq5sc_openshift-config-operator(530553aa-0a1d-423e-8a22-f5eb4bdbb883): CreateContainerError: context deadline exceeded Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.958662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.139999 4183 scope.go:117] "RemoveContainer" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.143820 4183 scope.go:117] "RemoveContainer" containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b" Aug 13 20:03:33 crc kubenswrapper[4183]: E0813 20:03:33.146579 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.146712 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.148155 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.152577 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.154245 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.156673 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.160183 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.163263 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.164587 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.165673 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.166966 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.167635 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.170179 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.171476 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.179570 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.180585 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.181576 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.182543 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.184442 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.185063 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.185589 4183 status_manager.go:853] "Failed to get status 
for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.186180 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.186691 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.187497 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.188824 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.192558 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.193641 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.195080 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.195730 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial 
tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.197338 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.198623 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.200950 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.201666 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.202457 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.204072 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.205686 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.207140 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.208048 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.209113 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.209910 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.210405 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.211084 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.211709 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.212357 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.213086 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.213621 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.214235 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: E0813 20:03:33.919739 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:34 crc kubenswrapper[4183]: I0813 20:03:34.872349 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:34 crc kubenswrapper[4183]: I0813 20:03:34.872962 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.211419 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.212210 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.213376 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.214993 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.216000 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.217673 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.220219 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.223477 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.224896 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.226685 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.234192 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.235357 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.237326 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 
20:03:35.239180 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.240549 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.241331 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.242495 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.243645 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.244446 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.245583 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.247018 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.247945 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" 
pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.249169 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.665696 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.665987 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.666063 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.666083 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.259121 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.260281 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.261425 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.262254 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.263093 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.263115 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.932530 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.932730 
4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611,Command:[cluster-openshift-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8bxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator(0f394926-bdb9-425c-b36e-264d7fd34550): CreateContainerError: context deadline exceeded Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.933059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.189487 4183 scope.go:117] "RemoveContainer" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.191418 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 
13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.192501 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.193612 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.197451 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.199123 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.200252 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.201146 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.201952 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.202673 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.203381 4183 status_manager.go:853] "Failed 
to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.204067 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.204738 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.205462 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.206116 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.206760 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.207586 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.213151 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.213950 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.215625 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.216425 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.217261 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.218475 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.219215 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.221347 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:39 crc kubenswrapper[4183]: I0813 20:03:39.541123 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:03:39 crc kubenswrapper[4183]: I0813 20:03:39.541261 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:03:39 crc kubenswrapper[4183]: I0813 20:03:39.872486 4183 scope.go:117] "RemoveContainer" 
containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.238198 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4" Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.238937 4183 kuberuntime_manager.go:1262] container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,Command:[/bin/bash -c #!/bin/bash Aug 13 20:03:40 crc kubenswrapper[4183]: set -o allexport Aug 13 20:03:40 crc kubenswrapper[4183]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Aug 13 20:03:40 crc kubenswrapper[4183]: source /etc/kubernetes/apiserver-url.env Aug 13 20:03:40 crc kubenswrapper[4183]: else Aug 13 20:03:40 crc kubenswrapper[4183]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Aug 13 20:03:40 crc kubenswrapper[4183]: exit 1 Aug 13 20:03:40 crc kubenswrapper[4183]: fi Aug 13 20:03:40 crc kubenswrapper[4183]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Aug 13 20:03:40 crc kubenswrapper[4183]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:SDN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ec002699d6fa111b93b08bda974586ae4018f4a52d1cbfd0995e6dc9c732151,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce3a9355a4497b51899867170943d34bbc2d2b7996d9a002c103797bd828d71b,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0791454224e2ec76fd43916220bd5ae55bf18f37f0cd571cb05c76e1d791453,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE
_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc5f4b6565d37bd875cdb42e95372128231218fb8741f640b09565d9dcea2cb1,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4sfhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-767c585db5-zd56b_openshift-network-operator(cc291782-27d2-4a74-af79-c7dcb31535d2): CreateContainerError: context deadline exceeded Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.239006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-network-operator/network-operator-767c585db5-zd56b" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.921336 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.220155 4183 scope.go:117] "RemoveContainer" 
containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.221970 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.223472 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.224279 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.225119 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.225675 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.226532 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.227446 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.228282 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.229134 4183 status_manager.go:853] "Failed to 
get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.230321 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.231455 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.232479 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.233494 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.235245 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.236420 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.237317 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.238312 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.239691 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.241177 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.242645 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.243418 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.244192 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.244936 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.245929 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:43 crc kubenswrapper[4183]: E0813 20:03:43.150624 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" 
event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:44 crc kubenswrapper[4183]: E0813 20:03:44.431158 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Aug 13 20:03:44 crc kubenswrapper[4183]: E0813 20:03:44.431657 4183 kuberuntime_manager.go:1262] container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d9vhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-546b4f8984-pwccz_openshift-service-ca-operator(6d67253e-2acd-4bc1-8185-793587da4f17): CreateContainerError: context deadline exceeded Aug 13 20:03:44 crc kubenswrapper[4183]: E0813 20:03:44.431702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with 
CreateContainerError: \"context deadline exceeded\"" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 20:03:44 crc kubenswrapper[4183]: I0813 20:03:44.872013 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:44 crc kubenswrapper[4183]: I0813 20:03:44.872130 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.212536 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.214267 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.215631 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.217100 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.219131 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.220211 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.221167 4183 status_manager.go:853] "Failed to get status for 
pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.222126 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.223070 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.223960 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.224621 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.225944 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.226706 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.227767 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.229005 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.230031 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.231325 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.233490 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.234690 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.235763 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.236752 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.237925 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.238925 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" 
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.239881 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.245310 4183 scope.go:117] "RemoveContainer" containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.245885 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.247417 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.248220 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.249427 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.250417 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.251017 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.251583 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.252204 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.252927 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.253378 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.254047 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.256380 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.257620 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.258610 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.259600 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.260899 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.261555 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.262370 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.263113 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.263691 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.265312 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.266454 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.267516 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.268704 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.269974 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.647315 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.648097 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.648578 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.649118 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.649679 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.649721 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:03:47 crc kubenswrapper[4183]: E0813 20:03:47.924557 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s"
Aug 13 20:03:49 crc kubenswrapper[4183]: I0813 20:03:49.540076 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:03:49 crc kubenswrapper[4183]: I0813 20:03:49.540191 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.547620 4183 scope.go:117] "RemoveContainer" containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.815311 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 20:03:51 crc kubenswrapper[4183]: E0813 20:03:51.818337 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": container with ID starting with 42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf not found: ID does not exist" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.818414 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"} err="failed to get container status \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": rpc error: code = NotFound desc = could not find container \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": container with ID starting with 42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf not found: ID does not exist"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.818438 4183 scope.go:117] "RemoveContainer" containerID="71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.908296 4183 scope.go:117] "RemoveContainer" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.973248 4183 scope.go:117] "RemoveContainer" containerID="417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.999520 4183 scope.go:117] "RemoveContainer" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.136716 4183 scope.go:117] "RemoveContainer" containerID="a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.251946 4183 scope.go:117] "RemoveContainer" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.332974 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.334677 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.334969 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"}
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.347377 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.352028 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.352959 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.353908 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.354585 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.355237 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.355963 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.357058 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.359662 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.360466 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.361210 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.362273 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.363085 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.374354 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.377143 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.379331 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.381240 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.382386 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.383532 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.384450 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.385304 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.386031 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.386926 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.388027 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.389206 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.591196 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c"}
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594312 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594452 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594543 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594627 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.595310 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.596506 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.597199 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.598261 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.599060 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.599094 4183 scope.go:117] "RemoveContainer" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.599826 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.601014 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.601085 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\": container with ID starting with 7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e not found: ID does not exist" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.601130 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"} err="failed to get container status \"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\": rpc error: code = NotFound desc = could not find container \"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\": container with ID starting with 7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e not found: ID does not exist"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.601144 4183 scope.go:117] "RemoveContainer" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.604198 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.605258 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.606558 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.608023 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.609312 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.610283 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.611159 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.611766 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.612431 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.613495 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.615178 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.616312 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.618643 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.625019 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.626334 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.628113 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.631859 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.645070 4183 scope.go:117] "RemoveContainer" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.650987 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/0.log" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.651253 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"} Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.655764 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.657134 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.657711 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.658417 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.659137 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.659921 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.663326 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.667993 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.670400 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.673032 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.675751 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.680620 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.689708 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.691103 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.694349 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.699256 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.703504 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": 
dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.705175 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.719389 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.724042 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.730489 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.737357 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.740380 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.746116 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.747167 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.815913 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"a3aeac3b3f0abd9616c32591e8c03ee04ad93d9eaa1a57f5f009d1e5534dc9bf"} Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.836479 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"4df62f5cb9c66f562c10ea184889e69acedbf4f895667310c68697db48fd553b"} Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.846168 4183 scope.go:117] "RemoveContainer" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.847149 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\": container with ID starting with 51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52 not found: ID does not exist" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.847236 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52"} err="failed to get container status \"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\": rpc error: code = NotFound desc = could not find container \"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\": container with ID starting with 51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52 not found: ID does not exist" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.847256 4183 scope.go:117] "RemoveContainer" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.847353 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-scheduler-cert-syncer_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff'" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.847399 4183 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-scheduler-cert-syncer_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff'" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.847419 4183 scope.go:117] "RemoveContainer" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.865429 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"bf055e84f32193b9c1c21b0c34a61f01","Type":"ContainerStarted","Data":"da0d5a4673db72bf057aaca9add937d2dd33d15edccefb4817f17da3759c2927"} Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.884076 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.923425 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.924626 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.925393 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.926622 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.930474 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.931532 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.932827 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.933481 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.934358 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.938533 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.939640 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.941010 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.945088 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.946475 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.956057 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.956738 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.962403 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.970510 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.972115 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.975619 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.978427 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.996070 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.997568 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.001222 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.007673 4183 status_manager.go:853] "Failed to get 
status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.153513 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.161396 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'd1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624'" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.161515 4183 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'd1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624'" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.161545 4183 scope.go:117] "RemoveContainer" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.161685 4183 scope.go:117] "RemoveContainer" containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.165607 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\": container with ID starting with 
d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92 not found: ID does not exist" containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.165661 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92"} err="failed to get container status \"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\": rpc error: code = NotFound desc = could not find container \"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\": container with ID starting with d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92 not found: ID does not exist" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.165680 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.166373 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"} err="failed to get container status \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": rpc error: code = NotFound desc = could not find container \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": container with ID starting with 42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf not found: ID does not exist" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.166417 4183 scope.go:117] "RemoveContainer" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.388109 4183 scope.go:117] "RemoveContainer" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.489002 4183 scope.go:117] "RemoveContainer" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.490441 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\": container with ID starting with 138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325 not found: ID does not exist" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.490514 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325"} err="failed to get container status \"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\": rpc error: code = NotFound desc = could not find container \"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\": container with ID starting with 138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325 not found: ID does not exist" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.490537 4183 scope.go:117] "RemoveContainer" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.492177 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\": container with ID starting with 2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2 not found: ID does not exist" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.492257 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2"} err="failed to get container status \"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\": rpc error: code = NotFound desc = could not find container \"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\": container with ID starting with 2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2 not found: ID does not exist" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.492291 4183 scope.go:117] "RemoveContainer" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.554953 4183 scope.go:117] "RemoveContainer" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.558249 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\": container with ID starting with fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a not found: ID does not exist" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.558305 4183 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\": rpc error: code = NotFound desc = could not find container \"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\": container with ID starting with fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a not found: ID does not exist" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.558335 4183 scope.go:117] "RemoveContainer" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.900996 4183 scope.go:117] "RemoveContainer" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.901228 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-apiserver_kube-apiserver-crc_openshift-kube-apiserver_53c1db1508241fbac1bedf9130341ffe_0 in pod sandbox e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83 from index: no such id: '7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5'" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.901273 4183 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-apiserver_kube-apiserver-crc_openshift-kube-apiserver_53c1db1508241fbac1bedf9130341ffe_0 in pod sandbox e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83 from index: no 
such id: '7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5'" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.914540 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\": container with ID starting with f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480 not found: ID does not exist" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.914650 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480"} err="failed to get container status \"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\": rpc error: code = NotFound desc = could not find container \"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\": container with ID starting with f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480 not found: ID does not exist" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.914676 4183 scope.go:117] "RemoveContainer" containerID="32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.985211 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerStarted","Data":"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"} Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.989633 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.990768 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.992256 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.993070 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.994202 4183 status_manager.go:853] "Failed 
to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.995251 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.997368 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.998538 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.999235 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.999727 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.000364 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.000917 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.001581 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.005208 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.006195 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.006867 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.007503 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.008135 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.010212 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.012267 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.013308 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc 
kubenswrapper[4183]: I0813 20:03:54.014224 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.015561 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.017215 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.018054 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.018769 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.034042 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.050142 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.067978 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"bf055e84f32193b9c1c21b0c34a61f01","Type":"ContainerStarted","Data":"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.070249 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.071425 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.073964 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.077460 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.078588 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.081030 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.082476 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.084958 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.086267 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"e302077a679b703dfa8553f1ea474302e86cc72bc23b53926bdc62ce33df0f64"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.088211 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.094913 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.097251 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.102324 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.102620 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerStarted","Data":"c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.103968 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.106639 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.113311 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.116123 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 
20:03:54.118679 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.123027 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.124242 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.125181 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.125924 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.126600 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.128062 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.129082 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.129903 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.135059 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.136239 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.138270 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.152304 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.153725 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.155006 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.156625 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.159271 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.164315 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.165074 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.165661 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.166382 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.167048 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.172278 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.176069 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.176915 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.181046 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 
20:03:54.183126 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.189981 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.193940 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.198031 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.199125 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.200183 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.205213 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.211008 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.211825 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.222035 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.222627 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.230992 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.233069 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.233933 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.234623 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.235869 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.242517 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.243296 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.245137 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.246348 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.249618 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.250358 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.251385 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.252168 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.252716 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" 
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.253704 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.254575 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.255223 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.261472 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.281818 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.287989 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.289834 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.300725 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.321118 4183 status_manager.go:853] 
"Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.343664 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.361538 4183 scope.go:117] "RemoveContainer" containerID="850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.368418 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.382082 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.408899 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.425935 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.431358 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/0.log" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.436109 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.437653 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.439496 4183 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.441269 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.475968 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.481505 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.502828 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.525338 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.629285 4183 scope.go:117] "RemoveContainer" containerID="a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708140 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708286 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Running" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708320 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708378 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Pending" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708414 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 
20:03:54.708451 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Pending" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.738941 4183 scope.go:117] "RemoveContainer" containerID="b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875372 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875453 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875544 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875464 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: E0813 20:03:54.928188 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:54 crc kubenswrapper[4183]: E0813 20:03:54.960376 4183 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92b2a8634cfe8a21cffcc98cc8c87160.slice/crio-dc3b34e8b871f3bd864f0c456c6ee0a0f7a97f171f4c0c5d20a5a451b26196e9.scope\": RecentStats: unable to find data in memory cache]" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.214351 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.218089 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.219195 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.219961 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.224599 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.228904 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.229919 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.231029 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.231920 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.235357 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.236962 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.238438 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.240074 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.241611 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.245553 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.249464 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.251421 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.254160 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.255417 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.256743 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.257566 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.260917 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.264107 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.266770 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.277921 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.279402 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.283013 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.285316 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" 
pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.290481 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.620454 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.621742 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/0.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622475 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" exitCode=255 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622574 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622611 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622633 4183 scope.go:117] "RemoveContainer" containerID="98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.627053 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:55.628078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.629064 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.629596 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="dc3b34e8b871f3bd864f0c456c6ee0a0f7a97f171f4c0c5d20a5a451b26196e9" exitCode=0 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.629704 4183 
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.630399 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.630462 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.632367 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:55.632479 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.633693 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.648340 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerStarted","Data":"5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed"}
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.650425 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.652757 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.655106 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.656986 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.658549 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba" exitCode=0
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.658644 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerDied","Data":"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"}
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.660075 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.660097 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.663572 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:55.663943 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.664898 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665145 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665404 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665450 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665467 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665996 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff"}
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.666515 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.667399 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.671709 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.676322 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.676514 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerStarted","Data":"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a"}
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.681983 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.682718 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.683350 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.683933 4183 status_manager.go:853] "Failed to get status for pod"
podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.684530 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.685091 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.685546 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686263 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686545 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686586 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686900 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.687592 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.693863 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.694675 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.718511 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.755648 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.758261 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.759512 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.761201 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.765354 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.785133 4183 status_manager.go:853] "Failed to get 
status for pod" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" pod="openshift-marketplace/community-operators-8jhz6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8jhz6\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.801950 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.834459 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.842708 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.869897 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.901900 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.907735 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.922435 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.942988 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 
13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.963378 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.983100 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.004700 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.024106 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.047217 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.061301 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.081932 4183 status_manager.go:853] "Failed to get status for pod" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rmwfn\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.101674 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.122544 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.157367 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.167833 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.181304 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.201007 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.221447 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.246117 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.262127 4183 status_manager.go:853] "Failed to get status for pod" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" pod="openshift-marketplace/certified-operators-7287f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7287f\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.286681 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.301302 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.321179 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.340915 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.696466 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/1.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.697332 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.697924 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316" exitCode=255 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.697963 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.698501 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.698518 4183 scope.go:117] "RemoveContainer" containerID="1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.706332 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.715764 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.723053 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.726435 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.737374 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.739719 4183 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157" exitCode=0 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.739832 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.740447 4183 scope.go:117] "RemoveContainer" containerID="7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.744316 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/1.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.745125 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.748107 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.748152 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:58.788129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.288123 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.288273 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.290115 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.290189 4183 status_manager.go:853] "Failed to get status for pod" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" pod="openshift-marketplace/redhat-operators-dcqzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dcqzh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.291275 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.292131 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.293145 4183 status_manager.go:853] "Failed to get status for pod" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-8b455464d-f9xdt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.293268 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.294218 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.295226 4183 status_manager.go:853] "Failed to get status for pod" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" pod="openshift-marketplace/community-operators-8jhz6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8jhz6\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.295645 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.295730 4183 kubelet_node_status.go:581] "Unable to update node status" 
err="update node status exceeds retry count" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.296617 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.297883 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.299107 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.301006 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.301906 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.303484 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.305187 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.306082 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.308614 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.309539 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.312005 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.313185 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.314671 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.316158 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.320652 4183 status_manager.go:853] "Failed to get status for pod" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rmwfn\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.321893 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.322873 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.324685 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.327030 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.328459 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.329474 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.330380 4183 status_manager.go:853] "Failed to get status for pod" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" pod="openshift-marketplace/certified-operators-7287f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7287f\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.331342 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.332105 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.332755 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.333584 4183 status_manager.go:853] 
"Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.334273 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.334880 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.335981 4183 status_manager.go:853] "Failed to get status for pod" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" pod="openshift-marketplace/redhat-operators-dcqzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dcqzh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.337487 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.539176 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.539344 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.776658 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.777414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.778308 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd"} Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.820526 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"} Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.836446 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71"} Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.836953 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.839287 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.839373 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.868702 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.872256 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/1.log" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.873984 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.876957 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca" exitCode=255 Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877027 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca"} Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877070 4183 scope.go:117] "RemoveContainer" containerID="1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877941 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877988 4183 scope.go:117] "RemoveContainer" 
containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca" Aug 13 20:04:00 crc kubenswrapper[4183]: E0813 20:04:00.878661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.912502 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"} Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.918382 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.922374 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952170 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952209 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952863 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a"} Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952902 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.953193 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.953280 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.974963 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"} Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.983911 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log" Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.985984 4183 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" exitCode=1 Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.986118 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71"} Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.986157 4183 scope.go:117] "RemoveContainer" containerID="7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157" Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.986735 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" Aug 13 20:04:02 crc kubenswrapper[4183]: E0813 20:04:02.987548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:03 crc kubenswrapper[4183]: I0813 20:04:03.998006 4183 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc" exitCode=0 Aug 13 20:04:03 crc kubenswrapper[4183]: I0813 20:04:03.998105 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc"} Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.003442 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.003935 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.003971 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.004254 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.038070 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.523272 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.524281 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" Aug 13 20:04:04 crc kubenswrapper[4183]: E0813 20:04:04.524679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871606 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871700 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871749 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871952 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:06 crc kubenswrapper[4183]: I0813 20:04:06.232970 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:06 crc kubenswrapper[4183]: I0813 20:04:06.235698 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:06 crc kubenswrapper[4183]: I0813 20:04:06.247545 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.080683 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0"} Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.086603 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" exitCode=0 Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.086722 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" 
event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a"} Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.090544 4183 generic.go:334] "Generic (PLEG): container finished" podID="bb917686-edfb-4158-86ad-6fce0abec64c" containerID="c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694" exitCode=0 Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.090601 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694"} Aug 13 20:04:09 crc kubenswrapper[4183]: I0813 20:04:09.540223 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:04:09 crc kubenswrapper[4183]: I0813 20:04:09.542063 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.128627 4183 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff" exitCode=0 Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.128731 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff"} Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.139614 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerStarted","Data":"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467"} Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.144463 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerStarted","Data":"844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568"} Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.584765 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:04:12 crc kubenswrapper[4183]: I0813 20:04:12.167278 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843"} Aug 13 20:04:13 crc kubenswrapper[4183]: I0813 20:04:13.463032 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.370971 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7287f" Aug 13 
20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.372425 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7287f" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.735468 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.737108 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.871953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.872447 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.872692 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.871953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.873120 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.873545 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.873658 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.876995 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.877174 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" 
containerID="cri-o://9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c" gracePeriod=2 Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.936494 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.937746 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.938058 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.938080 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.210617 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.210672 4183 scope.go:117] "RemoveContainer" containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca" Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.288075 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/0.log" Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.288188 4183 generic.go:334] "Generic (PLEG): container finished" podID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" containerID="cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436" exitCode=1 Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.289403 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerDied","Data":"cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436"} Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.289888 4183 scope.go:117] "RemoveContainer" containerID="cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436" Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.939985 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:15 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:15 crc kubenswrapper[4183]: > Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.102098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:16 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:16 crc kubenswrapper[4183]: > Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.109451 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:16 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:16 crc kubenswrapper[4183]: > Aug 13 20:04:16 crc 
kubenswrapper[4183]: I0813 20:04:16.247089 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.300679 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c" exitCode=0 Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.300729 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c"} Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.301340 4183 scope.go:117] "RemoveContainer" containerID="74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24" Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.305561 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log" Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.306619 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.307283 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"} Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.317334 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"00e210723fa2ab3c15d1bb1e413bb28a867eb77be9c752bffa81f06d8a65f0ee"} Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.318439 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.318740 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.319123 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.321562 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/0.log" Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.321649 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" 
event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a"} Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.332105 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.334088 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.334885 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"} Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335450 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335487 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335485 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335605 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.336257 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.336333 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.211510 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.539623 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.540660 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.658478 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.658588 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.377545 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log" Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.666273 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.666350 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.667514 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.667578 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.847498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:20 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:20 crc kubenswrapper[4183]: > Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.391098 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log" Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.391224 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"} Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.394316 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.394375 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.394425 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" 
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.405955 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.424731 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.427524 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.428573 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.430940 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" exitCode=255
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.431015 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"}
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.431063 4183 scope.go:117] "RemoveContainer" containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.432643 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.432698 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"
Aug 13 20:04:21 crc kubenswrapper[4183]: E0813 20:04:21.435988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.441900 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.444007 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.445109 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.449403 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" exitCode=255
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.452444 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"}
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.452721 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.454578 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.454626 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.455260 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.455951 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 20:04:22 crc kubenswrapper[4183]: E0813 20:04:22.455397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.677346 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log"
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678661 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log"
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678725 4183 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" exitCode=1
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678937 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"}
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678987 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71"
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.679550 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"
Aug 13 20:04:23 crc kubenswrapper[4183]: E0813 20:04:23.680072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.684831 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log"
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.685747 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.522960 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.695084 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log"
Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.695994 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"
Aug 13 20:04:24 crc kubenswrapper[4183]: E0813 20:04:24.696619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.871956 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.872068 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.872273 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.872125 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.531477 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:25 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:25 crc kubenswrapper[4183]: >
Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.665334 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.665530 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.666412 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"
Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.666474 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"
Aug 13 20:04:25 crc kubenswrapper[4183]: E0813 20:04:25.667564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.707556 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"
Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.707921 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"
Aug 13 20:04:25 crc kubenswrapper[4183]: E0813 20:04:25.717101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:04:26 crc kubenswrapper[4183]: I0813 20:04:26.082431 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:26 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:26 crc kubenswrapper[4183]: >
Aug 13 20:04:26 crc kubenswrapper[4183]: I0813 20:04:26.102356 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:26 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:26 crc kubenswrapper[4183]: >
Aug 13 20:04:29 crc kubenswrapper[4183]: I0813 20:04:29.540563 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:04:29 crc kubenswrapper[4183]: I0813 20:04:29.541077 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:04:30 crc kubenswrapper[4183]: I0813 20:04:30.809386 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:30 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:30 crc kubenswrapper[4183]: >
Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.872612 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.873160 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.873017 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.873257 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:04:35 crc kubenswrapper[4183]: I0813 20:04:35.523618 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:35 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:35 crc kubenswrapper[4183]: >
Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.055527 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:36 crc kubenswrapper[4183]: >
Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.067382 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:36 crc kubenswrapper[4183]: >
Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.209341 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"
Aug 13 20:04:36 crc kubenswrapper[4183]: E0813 20:04:36.209960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.941233 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="48128e8d38b5cbcd2691da698bd9cac3" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:04:38 crc kubenswrapper[4183]: I0813 20:04:38.803919 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="1f93bc40-081c-4dbc-905a-acda15a1c6ce"
Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.220261 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"
Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.220322 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"
Aug 13 20:04:39 crc kubenswrapper[4183]: E0813 20:04:39.221136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.437995 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.540376 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.540474 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.662007 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.739757 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Aug 13 20:04:40 crc kubenswrapper[4183]: I0813 20:04:40.928980 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:40 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:40 crc kubenswrapper[4183]: >
Aug 13 20:04:43 crc kubenswrapper[4183]: I0813 20:04:43.083757 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Aug 13 20:04:44 crc kubenswrapper[4183]: I0813 20:04:44.326702 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Aug 13 20:04:44 crc kubenswrapper[4183]: I0813 20:04:44.890538 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.404275 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.410685 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.533142 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:45 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:45 crc kubenswrapper[4183]: >
Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.549551 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.559224 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.210305 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"
Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.777538 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.862868 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log"
Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.862977 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077"}
Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.863354 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.866328 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.866537 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.935187 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Aug 13 20:04:48 crc kubenswrapper[4183]: I0813 20:04:48.415454 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Aug 13 20:04:48 crc kubenswrapper[4183]: I0813 20:04:48.871874 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 20:04:48 crc kubenswrapper[4183]: I0813 20:04:48.872663 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 20:04:49 crc kubenswrapper[4183]: I0813 20:04:49.539935 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:04:49 crc kubenswrapper[4183]: I0813 20:04:49.540612 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:04:49 crc kubenswrapper[4183]: I0813 20:04:49.799903 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 20:04:49 crc kubenswrapper[4183]: I0813 20:04:49.943986 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.273701 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.900178 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/3.log"
Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.907557 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log"
Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.907669 4183 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" exitCode=1
Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.907705 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077"}
Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.907743 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"
Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.908626 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077"
Aug 13 20:04:50 crc kubenswrapper[4183]: E0813 20:04:50.909163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.210255 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"
Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.210305 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"
Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.212502 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.868191 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.917089 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/3.log"
Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.150279 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc"
Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.761529 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.926570 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log"
Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.928558 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.930835 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"}
Aug 13 20:04:53 crc kubenswrapper[4183]: I0813 20:04:53.243045 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.245119 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.494708 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.522671 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.523584 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077"
Aug 13 20:04:54 crc kubenswrapper[4183]: E0813 20:04:54.524261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.626589 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714562 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714725 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714823 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714889 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Running"
Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.996764 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log"
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.007074 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/0.log"
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.007175 4183 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b" exitCode=1
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.007251 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"}
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.007368 4183 scope.go:117] "RemoveContainer" containerID="957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2"
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.008069 4183 scope.go:117] "RemoveContainer" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"
Aug 13 20:04:55 crc kubenswrapper[4183]: E0813 20:04:55.008829 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-7d46d5bb6d-rrg6t_openshift-ingress-operator(7d51f445-054a-4e4f-a67b-a828f5a32511)\"" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.904963 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.019162 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.020146 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.021084 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"}
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.024920 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.452492 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.474971 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Aug 13 20:04:57 crc kubenswrapper[4183]: I0813 20:04:57.089106 4183 reflector.go:351] Caches populated
for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Aug 13 20:04:57 crc kubenswrapper[4183]: I0813 20:04:57.629887 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Aug 13 20:04:57 crc kubenswrapper[4183]: I0813 20:04:57.789896 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.152330 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.472077 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.562995 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.675559 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.893419 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.073153 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log" Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.075333 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log" Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.076138 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log" Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.077032 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" exitCode=255 Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.077097 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"} Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.077146 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.078341 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:04:59 crc kubenswrapper[4183]: E0813 20:04:59.078943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" 
podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.135243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.541093 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.542262 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.886707 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.090156 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.093150 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.094540 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.095262 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" exitCode=255 Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.095305 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"} Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.095764 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.096302 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.096440 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:00 crc kubenswrapper[4183]: E0813 20:05:00.097254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" 
pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.114000 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.665449 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.666145 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.668984 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.817164 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.860638 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.880066 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.922569 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.004185 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.104914 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log" Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.106219 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.110562 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.110684 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:01 crc kubenswrapper[4183]: E0813 20:05:01.114138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.669639 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Aug 13 20:05:01 crc 
kubenswrapper[4183]: I0813 20:05:01.802689 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x" Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.997359 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.075704 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.114082 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.114415 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:02 crc kubenswrapper[4183]: E0813 20:05:02.115311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.270366 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.361686 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.462052 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.876429 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.121445 4183 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6" exitCode=0 Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.121510 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6"} Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.534136 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.821185 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Aug 13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.024845 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Aug 
13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.357290 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Aug 13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.467645 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Aug 13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.598329 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.140521 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636"} Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.415288 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.666656 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.667611 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.667649 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:05 crc kubenswrapper[4183]: E0813 20:05:05.668446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.189768 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.210115 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Aug 13 20:05:06 crc kubenswrapper[4183]: E0813 20:05:06.210718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.251707 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.252974 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.298212 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.311324 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.543153 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.788729 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.130607 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.210910 4183 scope.go:117] "RemoveContainer" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b" Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.426231 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.680896 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.833891 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.170083 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log" Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.170396 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.172414 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"} Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.376013 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.627849 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.740880 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.759596 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.778671 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.182649 4183 generic.go:334] "Generic (PLEG): container finished" 
podID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24" exitCode=0 Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.182831 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"} Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.369612 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.540462 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.540555 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.816105 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.072842 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.317996 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log" Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.322904 4183 generic.go:334] "Generic (PLEG): container finished" podID="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" containerID="9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4" exitCode=255 Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.322974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerDied","Data":"9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4"} Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.324271 4183 scope.go:117] "RemoveContainer" containerID="9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4" Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.500315 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.650605 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.861252 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.112401 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.336302 4183 
kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerStarted","Data":"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"} Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.339472 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log" Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.340602 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"b6fafe7cac89983f8701bc5ed1df09e2b82c358b3a757377ca15de6546b5eb9f"} Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.411131 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.707689 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.739312 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Aug 13 20:05:12 crc kubenswrapper[4183]: I0813 20:05:12.205833 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Aug 13 20:05:12 crc kubenswrapper[4183]: I0813 20:05:12.599179 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Aug 13 20:05:12 crc kubenswrapper[4183]: I0813 20:05:12.955315 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.078966 4183 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.098878 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.112587 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=172.083720651 podStartE2EDuration="2m52.083720651s" podCreationTimestamp="2025-08-13 20:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:04:37.37903193 +0000 UTC m=+1244.071696868" watchObservedRunningTime="2025-08-13 20:05:13.083720651 +0000 UTC m=+1279.776385389" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.116733 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g4v97" podStartSLOduration=35619880.42286533 podStartE2EDuration="9894h30m55.116660334s" podCreationTimestamp="2024-06-27 13:34:18 +0000 UTC" firstStartedPulling="2025-08-13 19:57:52.840933971 +0000 UTC m=+839.533598689" lastFinishedPulling="2025-08-13 20:04:07.534728981 +0000 UTC m=+1214.227393689" observedRunningTime="2025-08-13 20:04:38.881376951 +0000 UTC m=+1245.574041929" watchObservedRunningTime="2025-08-13 
20:05:13.116660334 +0000 UTC m=+1279.809325042" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.117062 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rmwfn" podStartSLOduration=35620009.78697888 podStartE2EDuration="9894h31m39.117029724s" podCreationTimestamp="2024-06-27 13:33:34 +0000 UTC" firstStartedPulling="2025-08-13 19:59:18.068965491 +0000 UTC m=+924.761630139" lastFinishedPulling="2025-08-13 20:04:07.399016379 +0000 UTC m=+1214.091680987" observedRunningTime="2025-08-13 20:04:39.012673861 +0000 UTC m=+1245.705338829" watchObservedRunningTime="2025-08-13 20:05:13.117029724 +0000 UTC m=+1279.809694442" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.208428 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh","openshift-controller-manager/controller-manager-78589965b8-vmcwt","openshift-image-registry/image-registry-7cbd5666ff-bbfrf","openshift-console/console-84fccc7b6-mkncc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209287 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209340 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209479 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209510 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.224634 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d32440-4cce-4609-96f3-51ac94480aab" path="/var/lib/kubelet/pods/00d32440-4cce-4609-96f3-51ac94480aab/volumes" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.226609 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" path="/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.229290 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" path="/var/lib/kubelet/pods/b233d916-bfe3-4ae5-ae39-6b574d1aa05e/volumes" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.231822 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" path="/var/lib/kubelet/pods/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d/volumes" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.233054 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx","openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"] Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.237345 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" 
podNamespace="openshift-controller-manager" podName="controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.249551 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.250646 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.250739 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.250754 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.250970 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.250988 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251000 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251008 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251030 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" containerName="installer" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251037 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" containerName="installer" Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251050 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerName="installer" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251060 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerName="installer" Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251074 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="79050916-d488-4806-b556-1b0078b31e53" containerName="installer" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251082 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="79050916-d488-4806-b556-1b0078b31e53" containerName="installer" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252436 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252897 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" containerName="installer" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252925 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252938 4183 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerName="installer" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252952 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252966 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252982 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252995 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="79050916-d488-4806-b556-1b0078b31e53" containerName="installer" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.267733 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.269541 4183 topology_manager.go:215] "Topology Admit Handler" podUID="becc7e17-2bc7-417d-832f-55127299d70f" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.269755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.272943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.276321 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.282374 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.282731 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.289509 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292292 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292390 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292465 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292493 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292496 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292912 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292984 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.293303 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.293451 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.307677 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.394716 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.408564 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410401 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410445 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410484 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410552 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410603 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod 
\"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410646 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410715 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410887 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.462438 4183 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512368 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512461 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512498 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512562 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nvfwr\" (UniqueName: 
\"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512598 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512684 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.648609 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.648683 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.648763 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.649909 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: 
\"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.651487 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.655027 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.676275 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.677413 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.954091 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.958326 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.023275 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=59.023213394 podStartE2EDuration="59.023213394s" podCreationTimestamp="2025-08-13 20:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:14.020333212 +0000 UTC m=+1280.712998070" watchObservedRunningTime="2025-08-13 20:05:14.023213394 +0000 UTC m=+1280.715878202" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.066177 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k9qqb" podStartSLOduration=35619820.18712965 podStartE2EDuration="9894h30m58.066128853s" podCreationTimestamp="2024-06-27 13:34:16 +0000 UTC" firstStartedPulling="2025-08-13 19:57:51.83654203 +0000 UTC m=+838.529206798" lastFinishedPulling="2025-08-13 20:05:09.715541279 +0000 UTC m=+1276.408206007" observedRunningTime="2025-08-13 20:05:14.064306021 +0000 UTC m=+1280.756970859" 
watchObservedRunningTime="2025-08-13 20:05:14.066128853 +0000 UTC m=+1280.758793581" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.128077 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.204184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.205979 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.205874035 podStartE2EDuration="59.205874035s" podCreationTimestamp="2025-08-13 20:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:14.19801498 +0000 UTC m=+1280.890679758" watchObservedRunningTime="2025-08-13 20:05:14.205874035 +0000 UTC m=+1280.898539443" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.214829 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.222339 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.255305 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.565414 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.565913 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.669956 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.855193 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.152712 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.309951 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.628243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.658057 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.686472 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:15 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:15 
crc kubenswrapper[4183]: > Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.781369 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.344985 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.485318 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.513489 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.789608 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146002 4183 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146600 4183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" 
Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146629 4183 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-598fc85fd4-8wlsm_openshift-controller-manager(8b8d1c48-5762-450f-bd4d-9134869f432b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-598fc85fd4-8wlsm_openshift-controller-manager(8b8d1c48-5762-450f-bd4d-9134869f432b)\\\": rpc error: code = Unknown desc = failed to create 
pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626\\\" Netns:\\\"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod \\\"controller-manager-598fc85fd4-8wlsm\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.185604 4183 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.185687 4183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.185746 4183 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" ERRORED: error configuring pod 
[openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.186516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager(becc7e17-2bc7-417d-832f-55127299d70f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager(becc7e17-2bc7-417d-832f-55127299d70f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23\\\" Netns:\\\"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod \\\"route-controller-manager-6884dcf749-n4qpx\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podUID="becc7e17-2bc7-417d-832f-55127299d70f" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.209062 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.209095 4183 scope.go:117] "RemoveContainer" 
containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.209766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.297640 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.302574 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.381660 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.509832 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.625271 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.792176 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.175892 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.243339 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.321978 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.494179 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497100 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/0.log" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497201 4183 generic.go:334] "Generic (PLEG): container finished" podID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" containerID="0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a" exitCode=255 Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497239 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" 
event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerDied","Data":"0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a"} Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497284 4183 scope.go:117] "RemoveContainer" containerID="cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.498290 4183 scope.go:117] "RemoveContainer" containerID="0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a" Aug 13 20:05:18 crc kubenswrapper[4183]: E0813 20:05:18.499112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"control-plane-machine-set-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=control-plane-machine-set-operator pod=control-plane-machine-set-operator-649bd778b4-tt5tw_openshift-machine-api(45a8038e-e7f2-4d93-a6f5-7753aa54e63f)\"" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.666389 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.818229 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.818437 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.875753 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.977189 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.995738 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"] Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.996007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:18.996970 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.517497 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.540079 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.540285 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.540403 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.545389 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba"} pod="openshift-console/console-5d9678894c-wx62n" containerMessage="Container console failed startup probe, will be restarted" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.589297 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.700554 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"] Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.700751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.709757 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.977120 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:19 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:19 crc kubenswrapper[4183]: > Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.011674 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.084552 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.210537 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Aug 13 20:05:20 crc kubenswrapper[4183]: E0813 20:05:20.211602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.219236 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.743720 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.867244 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.066612 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.505896 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.552288 4183 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.669562 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.088839 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.293069 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.369896 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.609190 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"] Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.715427 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"] Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.789590 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.111893 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.279471 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.553213 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerStarted","Data":"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.553762 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.554111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerStarted","Data":"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.556456 4183 patch_prober.go:28] interesting pod/controller-manager-598fc85fd4-8wlsm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.556537 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.557599 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerStarted","Data":"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.557658 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerStarted","Data":"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.558583 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.560568 4183 patch_prober.go:28] interesting pod/route-controller-manager-6884dcf749-n4qpx 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" start-of-body= Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.560953 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.636023 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podStartSLOduration=242.635956854 podStartE2EDuration="4m2.635956854s" podCreationTimestamp="2025-08-13 20:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:23.62989526 +0000 UTC m=+1290.322560408" watchObservedRunningTime="2025-08-13 20:05:23.635956854 +0000 UTC m=+1290.328621982" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.706151 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.827966 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.949042 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.086654 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.125475 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.191367 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.205474 4183 patch_prober.go:28] interesting pod/controller-manager-598fc85fd4-8wlsm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.205611 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.365075 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.567394 4183 patch_prober.go:28] interesting pod/controller-manager-598fc85fd4-8wlsm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.567502 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.815329 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.826046 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.927063 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podStartSLOduration=241.926998625 podStartE2EDuration="4m1.926998625s" podCreationTimestamp="2025-08-13 20:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:23.71475329 +0000 UTC m=+1290.407418348" watchObservedRunningTime="2025-08-13 20:05:24.926998625 +0000 UTC m=+1291.619663633" Aug 13 20:05:25 crc kubenswrapper[4183]: E0813 20:05:25.203459 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107" Aug 13 20:05:25 crc kubenswrapper[4183]: E0813 20:05:25.207311 4183 kuberuntime_manager.go:1262] container &Container{Name:console,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae,Command:[/opt/bridge/bin/bridge --public-dir=/opt/bridge/static --config=/var/console-config/console-config.yaml --service-ca-file=/var/service-ca/service-ca.crt --v=2],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{104857600 0} {} 100Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:console-serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-oauth-config,ReadOnly:true,MountPath:/var/oauth-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-config,ReadOnly:true,MountPath:/var/console-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:service-ca,ReadOnly:true,MountPath:/var/service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:oauth-serving-cert,ReadOnly:true,MountPath:/var/oauth-serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2nz92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:1,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[sleep 25],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000590000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:30,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod console-644bb77b49-5x5xk_openshift-console(9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1): CreateContainerError: context deadline exceeded Aug 13 20:05:25 crc kubenswrapper[4183]: E0813 20:05:25.207440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Aug 13 20:05:25 crc kubenswrapper[4183]: I0813 20:05:25.770618 4183 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:25 crc kubenswrapper[4183]: I0813 20:05:25.843280 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Aug 13 20:05:25 crc kubenswrapper[4183]: I0813 20:05:25.898295 4183 reflector.go:351] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.203430 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:26 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:26 crc kubenswrapper[4183]: > Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.342830 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.352289 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Aug 13 20:05:26 crc kubenswrapper[4183]: E0813 20:05:26.531826 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61" Aug 13 20:05:26 crc kubenswrapper[4183]: E0813 20:05:26.532359 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,Command:[cluster-kube-scheduler-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_openshift-kube-scheduler-operator(71af81a9-7d43-49b2-9287-c375900aa905): CreateContainerError: context deadline exceeded Aug 13 20:05:26 crc kubenswrapper[4183]: E0813 20:05:26.532539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.533765 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.583286 4183 generic.go:334] "Generic (PLEG): container finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f" exitCode=0 Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.583384 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f"} Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.588158 4183 generic.go:334] "Generic (PLEG): container finished" podID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerID="5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed" exitCode=0 Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.588850 4183 scope.go:117] "RemoveContainer" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.589271 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed"} Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.655378 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.734553 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.770986 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.829223 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.840965 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.850381 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.912068 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.416399 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Aug 13 
20:05:27 crc kubenswrapper[4183]: E0813 20:05:27.518744 4183 handlers.go:79] "Exec lifecycle hook for Container in Pod failed" err="command 'sleep 25' exited with 137: " execCommand=["sleep","25"] containerName="console" pod="openshift-console/console-5d9678894c-wx62n" message="" Aug 13 20:05:27 crc kubenswrapper[4183]: E0813 20:05:27.519483 4183 kuberuntime_container.go:653] "PreStop hook failed" err="command 'sleep 25' exited with 137: " pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" containerID="cri-o://bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba" Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.519589 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" containerID="cri-o://bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba" gracePeriod=33 Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.588263 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.601125 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"d329928035eabc24218bf53782983e5317173e1aceaf58f4d858919ca11603ad"} Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.732427 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.175705 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.615064 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"aef36bd2553b9941561332862e00ec117b296eb1e04d6191f7d1a0e272134312"} Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.621703 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/0.log" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.621932 4183 generic.go:334] "Generic (PLEG): container finished" podID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerID="bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba" exitCode=255 Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.622022 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerDied","Data":"bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba"} Aug 13 20:05:28 crc kubenswrapper[4183]: E0813 20:05:28.628458 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239" Aug 13 20:05:28 crc kubenswrapper[4183]: E0813 20:05:28.628643 4183 kuberuntime_manager.go:1262] container 
&Container{Name:cluster-image-registry-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,Command:[],Args:[--files=/var/run/configmaps/trusted-ca/tls-ca-bundle.pem --files=/etc/secrets/tls.crt --files=/etc/secrets/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cluster-image-registry-operator,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8,ValueFrom:nil,},EnvVar{Name:IMAGE_PRUNER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:AZURE_ENVIRONMENT_FILEPATH,Value:/tmp/azurestackcloud.json,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:trusted-ca,ReadOnly:false,MountPath:/var/run/configmaps/trusted-ca/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:image-registry-operator-tls,ReadOnly:false,MountPath:/etc/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9x6dp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000290000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-image-registry-operator-7769bd8d7d-q5cvv_openshift-image-registry(b54e8941-2fc4-432a-9e51-39684df9089e): CreateContainerError: context deadline exceeded Aug 13 20:05:28 crc kubenswrapper[4183]: E0813 20:05:28.628687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-image-registry-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.632001 4183 
kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8"} Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.640740 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerStarted","Data":"a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9"} Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.744903 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.782051 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-console/console-644bb77b49-5x5xk" podStartSLOduration=258.782001936 podStartE2EDuration="4m18.782001936s" podCreationTimestamp="2025-08-13 20:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:28.78074536 +0000 UTC m=+1295.473410118" watchObservedRunningTime="2025-08-13 20:05:28.782001936 +0000 UTC m=+1295.474666664" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.844642 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.059601 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.060691 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor" containerID="cri-o://15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" gracePeriod=5 Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.563129 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.647320 4183 scope.go:117] "RemoveContainer" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.648997 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.649295 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.974239 4183 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.211475 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.211526 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:30 crc kubenswrapper[4183]: E0813 20:05:30.212347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.226111 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.269216 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dcqzh" podStartSLOduration=35619822.16397022 podStartE2EDuration="9894h31m16.269154122s" podCreationTimestamp="2024-06-27 13:34:14 +0000 UTC" firstStartedPulling="2025-08-13 19:57:52.841939639 +0000 UTC m=+839.534604367" lastFinishedPulling="2025-08-13 20:05:26.947123582 +0000 UTC m=+1293.639788270" observedRunningTime="2025-08-13 20:05:30.047038901 +0000 UTC m=+1296.739703649" watchObservedRunningTime="2025-08-13 20:05:30.269154122 +0000 UTC m=+1296.961818970" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.469599 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-644bb77b49-5x5xk" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.469728 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-644bb77b49-5x5xk" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.475036 4183 patch_prober.go:28] interesting pod/console-644bb77b49-5x5xk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.475118 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.654393 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:30 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:30 crc kubenswrapper[4183]: > Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.657994 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/0.log" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.658370 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerStarted","Data":"1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b"} Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.188512 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" 
Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.227737 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.434834 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.543125 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:31 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:31 crc kubenswrapper[4183]: > Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.663391 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.670843 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"8c343d7ff4e8fd8830942fe00e0e9953854c7d57807d54ef2fb25d9d7bd48b55"} Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.713016 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.209841 4183 scope.go:117] "RemoveContainer" containerID="0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.209982 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.802208 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.847086 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.158289 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.159038 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-config-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,Command:[cluster-config-operator operator --operator-version=$(OPERATOR_IMAGE_VERSION) --authoritative-feature-gate-dir=/available-featuregates],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8dcvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-77658b5b66-dq5sc_openshift-config-operator(530553aa-0a1d-423e-8a22-f5eb4bdbb883): CreateContainerError: context deadline exceeded Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.159218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.172930 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.173636 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-controller-manager-operator,Image:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,Command:[cluster-kube-controller-manager-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d6201c776053346ebce8f90c34797a7a7c05898008e17f3ba9673f5f14507b0,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-6f6cb54958-rbddb_openshift-kube-controller-manager-operator(c1620f19-8aa3-45cf-931b-7ae0e5cd14cf): CreateContainerError: context deadline exceeded Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.173894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.442259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.701413 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.713829 4183 
logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/3.log" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.714252 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9"} Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.714456 4183 scope.go:117] "RemoveContainer" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.718388 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.720037 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.720403 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.722308 4183 scope.go:117] "RemoveContainer" containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.869762 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.166226 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.212181 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.249053 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.312330 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.445945 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_bf055e84f32193b9c1c21b0c34a61f01/startup-monitor/0.log" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.446088 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.526706 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.526756 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.526920 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.527030 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620106 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620218 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620246 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620339 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620378 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623328 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log" (OuterVolumeSpecName: "var-log") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: 
"bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623312 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests" (OuterVolumeSpecName: "manifests") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623479 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623721 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock" (OuterVolumeSpecName: "var-lock") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.658206 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.702693 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.703227 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722528 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722593 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722607 4183 reconciler_common.go:300] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") on node \"crc\" DevicePath \"\"" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722622 4183 reconciler_common.go:300] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722636 4183 reconciler_common.go:300] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") on node \"crc\" DevicePath \"\"" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.742655 4183 logs.go:325] 
"Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.743210 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"6e2b2ebcbabf5c1d8517ce153f68731713702ba7ac48dbbb35aa2337043be534"} Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.749146 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.760219 4183 generic.go:334] "Generic (PLEG): container finished" podID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" containerID="de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e" exitCode=255 Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.760314 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerDied","Data":"de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e"} Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.760945 4183 scope.go:117] "RemoveContainer" containerID="de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.780158 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"95ea01f530cb8f9c84220be232e511a271a9480b103ab0095af603077e0cb252"} Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.781288 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.787186 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_bf055e84f32193b9c1c21b0c34a61f01/startup-monitor/0.log" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.787250 4183 generic.go:334] "Generic (PLEG): container finished" podID="bf055e84f32193b9c1c21b0c34a61f01" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" exitCode=137 Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.788154 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.788564 4183 scope.go:117] "RemoveContainer" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.788989 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.789131 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.951503 4183 scope.go:117] "RemoveContainer" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" Aug 13 20:05:34 crc kubenswrapper[4183]: E0813 20:05:34.952199 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268\": container with ID starting with 15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268 not found: ID does not exist" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.952261 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268"} err="failed to get container status \"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268\": rpc error: code = NotFound desc = could not find container \"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268\": container with ID starting with 15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268 not found: ID does not exist" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.225693 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf055e84f32193b9c1c21b0c34a61f01" path="/var/lib/kubelet/pods/bf055e84f32193b9c1c21b0c34a61f01/volumes" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.229141 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.232216 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.311740 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.321850 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.321937 4183 kubelet.go:2639] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="423c3b23-c4c1-4055-868d-65e7387f40ce" Aug 13 20:05:35 crc kubenswrapper[4183]: 
I0813 20:05:35.341507 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.341580 4183 kubelet.go:2663] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="423c3b23-c4c1-4055-868d-65e7387f40ce" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.386306 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.800662 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"a91ec548a60f506a0a73fce12c0a6b3a787ccba29077a1f7d43da8a738f473d2"} Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.031690 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:36 crc kubenswrapper[4183]: > Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.140880 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.301833 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:36 crc kubenswrapper[4183]: > Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.511890 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.812216 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log" Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.812973 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"4dd7298bc15ad94ac15b2586221cba0590f58e6667404ba80b077dc597db4950"} Aug 13 20:05:37 crc kubenswrapper[4183]: E0813 20:05:37.200104 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = kubelet may be retrying requests that are timing out in CRI-O due to system load. 
Currently at stage container storage creation: the requested container k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 is now ready and will be provided to the kubelet on next retry: error reserving ctr name k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 for id 5311a227522754649347ee221cf50be9f546f8a870582594bc726558a6fab7f5: name is reserved" podSandboxID="489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5" Aug 13 20:05:37 crc kubenswrapper[4183]: E0813 20:05:37.200320 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611,Command:[cluster-openshift-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8bxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator(0f394926-bdb9-425c-b36e-264d7fd34550): CreateContainerError: kubelet may be retrying requests that are timing out in CRI-O due to system load. 
Currently at stage container storage creation: the requested container k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 is now ready and will be provided to the kubelet on next retry: error reserving ctr name k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 for id 5311a227522754649347ee221cf50be9f546f8a870582594bc726558a6fab7f5: name is reserved
Aug 13 20:05:37 crc kubenswrapper[4183]: E0813 20:05:37.200385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CreateContainerError: \"kubelet may be retrying requests that are timing out in CRI-O due to system load. Currently at stage container storage creation: the requested container k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 is now ready and will be provided to the kubelet on next retry: error reserving ctr name k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 for id 5311a227522754649347ee221cf50be9f546f8a870582594bc726558a6fab7f5: name is reserved\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.344231 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.464262 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.819730 4183 scope.go:117] "RemoveContainer" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d"
Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.905756 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Aug 13 20:05:38 crc kubenswrapper[4183]: I0813 20:05:38.438414 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Aug 13 20:05:38 crc kubenswrapper[4183]: I0813 20:05:38.835543 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log"
Aug 13 20:05:38 crc kubenswrapper[4183]: I0813 20:05:38.836025 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"18768e4e615786eedd49b25431da2fe5b5aaf29e37914eddd9e94881eac5e8c1"}
Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.019126 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
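The "name is reserved" error above is CRI-O's guard against duplicate containers when the kubelet retries a CreateContainer call that timed out under load: the first, still-running attempt holds the container name, the retry is rejected, and once the slow attempt finishes the completed container is handed back on the next retry, exactly as the message says. A toy model of that reserve/retry handshake (not CRI-O's code):

package main

import (
	"errors"
	"fmt"
	"sync"
)

var errReserved = errors.New("name is reserved")

// store is a toy stand-in for the runtime's container-name registry.
type store struct {
	mu       sync.Mutex
	inFlight map[string]bool   // names held by a still-running create
	done     map[string]string // name -> finished container ID
}

func (s *store) begin(name string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.inFlight[name] {
		return errReserved // a previous attempt still owns the name
	}
	s.inFlight[name] = true
	return nil
}

func (s *store) finish(name, id string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.inFlight, name)
	s.done[name] = id // ready for the kubelet's next retry
}

func (s *store) lookup(name string) (string, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	id, ok := s.done[name]
	return id, ok
}

func main() {
	s := &store{inFlight: map[string]bool{}, done: map[string]string{}}
	name := "k8s_example_ctr_0" // hypothetical container name
	_ = s.begin(name)           // slow first attempt reserves the name
	fmt.Println(s.begin(name))  // kubelet retry: name is reserved
	s.finish(name, "5311a22752")
	fmt.Println(s.lookup(name)) // next retry is handed the finished container
}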
*v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.188592 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.261904 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.538769 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.538986 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.550611 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.671238 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.854671 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.093265 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.161234 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:40 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:40 crc kubenswrapper[4183]: > Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.347047 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.397675 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.468081 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.475820 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-644bb77b49-5x5xk" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.483262 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-644bb77b49-5x5xk" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.708985 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"] Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.830628 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:40 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:40 crc kubenswrapper[4183]: > Aug 13 20:05:41 crc kubenswrapper[4183]: I0813 20:05:41.179381 4183 reflector.go:351] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Aug 13 20:05:41 crc kubenswrapper[4183]: E0813 20:05:41.226057 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4" Aug 13 20:05:41 crc kubenswrapper[4183]: E0813 20:05:41.226360 4183 kuberuntime_manager.go:1262] container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,Command:[/bin/bash -c #!/bin/bash Aug 13 20:05:41 crc kubenswrapper[4183]: set -o allexport Aug 13 20:05:41 crc kubenswrapper[4183]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Aug 13 20:05:41 crc kubenswrapper[4183]: source /etc/kubernetes/apiserver-url.env Aug 13 20:05:41 crc kubenswrapper[4183]: else Aug 13 20:05:41 crc kubenswrapper[4183]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Aug 13 20:05:41 crc kubenswrapper[4183]: exit 1 Aug 13 20:05:41 crc kubenswrapper[4183]: fi Aug 13 20:05:41 crc kubenswrapper[4183]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Aug 13 20:05:41 crc kubenswrapper[4183]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:SDN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ec002699d6fa111b93b08bda974586ae4018f4a52d1cbfd0995e6dc9c732151,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce3a9355a4497b51899867170943d34bbc2d2b7996d9a002c103797bd828d71b,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0791454224e2ec76fd43916220bd5ae55bf18f37f0cd571cb05c76e1d791453,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD
_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc5f4b6565d37bd875cdb42e95372128231218fb8741f640b09565d9dcea2cb1,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4sfhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-767c585db5-zd56b_openshift-network-operator(cc291782-27d2-4a74-af79-c7dcb31535d2): CreateContainerError: context deadline exceeded Aug 13 20:05:41 crc kubenswrapper[4183]: E0813 20:05:41.226433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-network-operator/network-operator-767c585db5-zd56b" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" Aug 13 20:05:41 crc kubenswrapper[4183]: I0813 20:05:41.666475 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Aug 13 20:05:41 crc kubenswrapper[4183]: I0813 20:05:41.869956 4183 scope.go:117] "RemoveContainer" containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce" Aug 13 20:05:42 crc kubenswrapper[4183]: I0813 20:05:42.828248 4183 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Aug 13 20:05:42 crc kubenswrapper[4183]: I0813 20:05:42.878397 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Aug 13 20:05:42 crc kubenswrapper[4183]: I0813 20:05:42.880586 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"c97fff743291294c8c2671715b19a9576ef9f434134cc0f02b695dbc32284d86"} Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.209312 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.209366 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.884551 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.897724 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log" Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.900136 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.902595 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a"} Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.278440 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.316338 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.541374 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.817110 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.916519 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.918705 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.920139 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" 
event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289"} Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.013856 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.089826 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podStartSLOduration=304.089658085 podStartE2EDuration="5m4.089658085s" podCreationTimestamp="2025-08-13 20:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:45.042200056 +0000 UTC m=+1311.734864874" watchObservedRunningTime="2025-08-13 20:05:45.089658085 +0000 UTC m=+1311.782322903" Aug 13 20:05:45 crc kubenswrapper[4183]: E0813 20:05:45.250964 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Aug 13 20:05:45 crc kubenswrapper[4183]: E0813 20:05:45.251273 4183 kuberuntime_manager.go:1262] container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d9vhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-546b4f8984-pwccz_openshift-service-ca-operator(6d67253e-2acd-4bc1-8185-793587da4f17): CreateContainerError: context deadline exceeded Aug 13 20:05:45 crc kubenswrapper[4183]: E0813 20:05:45.251332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CreateContainerError: \"context deadline exceeded\"" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.327881 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.665239 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.665483 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.901482 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:45 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:45 crc kubenswrapper[4183]: > Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.927429 4183 scope.go:117] "RemoveContainer" containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc" Aug 13 20:05:46 crc kubenswrapper[4183]: I0813 20:05:46.596218 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]log ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 20:05:46 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 20:05:46 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 20:05:46 crc kubenswrapper[4183]: healthz check failed Aug 13 20:05:46 crc kubenswrapper[4183]: I0813 20:05:46.596345 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:05:46 crc kubenswrapper[4183]: I0813 20:05:46.938478 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"7bc73c64b9d7e197b77d0f43ab147a148818682c82020be549d82802a07420f4"} Aug 13 20:05:48 crc 
kubenswrapper[4183]: I0813 20:05:48.956385 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:05:49 crc kubenswrapper[4183]: I0813 20:05:49.169157 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:05:49 crc kubenswrapper[4183]: I0813 20:05:49.521961 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Aug 13 20:05:50 crc kubenswrapper[4183]: I0813 20:05:50.699518 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:05:50 crc kubenswrapper[4183]: I0813 20:05:50.716124 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:05:50 crc kubenswrapper[4183]: I0813 20:05:50.778479 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:50 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:50 crc kubenswrapper[4183]: > Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.716496 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718307 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718444 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718554 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718680 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.748040 4183 scope.go:117] "RemoveContainer" containerID="47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7" Aug 13 20:05:55 crc kubenswrapper[4183]: I0813 20:05:55.816884 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:55 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:55 crc kubenswrapper[4183]: > Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.068190 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-10-retry-1-crc"] Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.070513 4183 topology_manager.go:215] "Topology Admit Handler" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" podNamespace="openshift-kube-controller-manager" podName="installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: E0813 20:05:57.072133 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 
20:05:57.072184 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.072369 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.073129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.078051 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.080371 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.117579 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-retry-1-crc"] Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.165299 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.165405 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.165432 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.266818 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267099 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267202 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " 
pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267699 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267745 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.298670 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.402598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.861827 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"] Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.862628 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" containerID="cri-o://b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a" gracePeriod=90 Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.862709 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289" gracePeriod=90 Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.989886 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-retry-1-crc"] Aug 13 20:05:58 crc kubenswrapper[4183]: I0813 20:05:58.042959 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerStarted","Data":"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec"} Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.055571 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log" Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.056695 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.058388 4183 generic.go:334] "Generic (PLEG): container finished" 
podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289" exitCode=0 Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.058470 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289"} Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.058521 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.795340 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.911750 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.071854 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerStarted","Data":"6cc839079ff04a5b6cb4524dc6e36a89fd8caab9bf6a552eeffb557088851619"} Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.076769 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.676057 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]log ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]etcd-readiness ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 20:06:00 crc kubenswrapper[4183]: [-]shutdown failed: reason withheld Aug 13 20:06:00 crc kubenswrapper[4183]: readyz check failed Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.676494 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" 
containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.676601 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.711960 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" podStartSLOduration=3.711887332 podStartE2EDuration="3.711887332s" podCreationTimestamp="2025-08-13 20:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:06:00.10385754 +0000 UTC m=+1326.796522368" watchObservedRunningTime="2025-08-13 20:06:00.711887332 +0000 UTC m=+1327.404552310" Aug 13 20:06:04 crc kubenswrapper[4183]: I0813 20:06:04.845332 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 20:06:04 crc kubenswrapper[4183]: I0813 20:06:04.971234 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 20:06:05 crc kubenswrapper[4183]: I0813 20:06:05.676342 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]log ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]etcd-readiness ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 20:06:05 crc kubenswrapper[4183]: [-]shutdown failed: reason withheld Aug 13 20:06:05 crc kubenswrapper[4183]: readyz check failed Aug 13 20:06:05 crc kubenswrapper[4183]: I0813 20:06:05.676435 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:06:06 crc kubenswrapper[4183]: I0813 20:06:06.907656 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent 
watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:06 crc kubenswrapper[4183]: I0813 20:06:06.913074 4183 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:06 crc kubenswrapper[4183]: I0813 20:06:06.994135 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" containerID="cri-o://1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b" gracePeriod=15 Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.146170 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/1.log" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147353 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/0.log" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147427 4183 generic.go:334] "Generic (PLEG): container finished" podID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerID="1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b" exitCode=2 Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147460 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerDied","Data":"1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b"} Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147512 4183 scope.go:117] "RemoveContainer" containerID="bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.475603 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/1.log" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.475695 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.528768 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.529095 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.529400 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.529551 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.530391 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.530572 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.531014 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.548624 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config" (OuterVolumeSpecName: "console-config") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.548824 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.548848 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.549462 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca" (OuterVolumeSpecName: "service-ca") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.554526 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.555144 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.555501 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b" (OuterVolumeSpecName: "kube-api-access-hjq9b") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "kube-api-access-hjq9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633186 4183 reconciler_common.go:300] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633267 4183 reconciler_common.go:300] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633282 4183 reconciler_common.go:300] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633293 4183 reconciler_common.go:300] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633306 4183 reconciler_common.go:300] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633316 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633327 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.155627 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/1.log" Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.155961 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.155971 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerDied","Data":"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7"} Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.156053 4183 scope.go:117] "RemoveContainer" containerID="1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b" Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.264684 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"] Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.270602 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"] Aug 13 20:06:09 crc kubenswrapper[4183]: I0813 20:06:09.219349 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" path="/var/lib/kubelet/pods/384ed0e8-86e4-42df-bd2c-604c1f536a15/volumes" Aug 13 20:06:10 crc kubenswrapper[4183]: I0813 20:06:10.675650 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]log ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]etcd-readiness ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 20:06:10 crc kubenswrapper[4183]: [-]shutdown failed: reason withheld Aug 13 20:06:10 crc kubenswrapper[4183]: readyz check failed Aug 13 20:06:10 crc kubenswrapper[4183]: I0813 20:06:10.676308 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:06:14 crc kubenswrapper[4183]: I0813 20:06:14.718261 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:15 crc kubenswrapper[4183]: I0813 20:06:15.666176 4183 
patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:15 crc kubenswrapper[4183]: I0813 20:06:15.666751 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:20 crc kubenswrapper[4183]: I0813 20:06:20.666389 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:20 crc kubenswrapper[4183]: I0813 20:06:20.666979 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:25 crc kubenswrapper[4183]: I0813 20:06:25.666823 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:25 crc kubenswrapper[4183]: I0813 20:06:25.667491 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.666322 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.667066 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.704832 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"] Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.705725 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" containerID="cri-o://2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" gracePeriod=2 Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.291244 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336637 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" exitCode=0 Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336726 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467"} Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336770 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7"} Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336890 4183 scope.go:117] "RemoveContainer" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336854 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.399059 4183 scope.go:117] "RemoveContainer" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.400918 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.401034 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.401135 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.407107 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities" (OuterVolumeSpecName: "utilities") pod "9ad279b4-d9dc-42a8-a1c8-a002bd063482" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.418403 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.418835 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" containerID="cri-o://a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9" gracePeriod=2 Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.460514 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp" (OuterVolumeSpecName: "kube-api-access-r7dbp") pod "9ad279b4-d9dc-42a8-a1c8-a002bd063482" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482"). InnerVolumeSpecName "kube-api-access-r7dbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.506106 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.506186 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.676153 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ad279b4-d9dc-42a8-a1c8-a002bd063482" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.710297 4183 scope.go:117] "RemoveContainer" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.713096 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.865597 4183 scope.go:117] "RemoveContainer" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" Aug 13 20:06:31 crc kubenswrapper[4183]: E0813 20:06:31.866587 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467\": container with ID starting with 2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467 not found: ID does not exist" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.866673 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467"} err="failed to get container status \"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467\": rpc error: code = NotFound desc = could not find container \"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467\": container with ID starting with 2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467 not found: ID does not exist" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.866689 4183 scope.go:117] "RemoveContainer" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" Aug 13 20:06:31 crc kubenswrapper[4183]: E0813 20:06:31.867610 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a\": container with ID starting with 5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a not found: ID does not exist" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.867833 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a"} err="failed to get container status \"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a\": rpc error: code = NotFound desc = could not find container \"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a\": container with ID starting with 5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a not found: ID does not exist" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.867857 4183 scope.go:117] "RemoveContainer" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" Aug 13 20:06:31 crc kubenswrapper[4183]: E0813 20:06:31.868437 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3\": container with ID starting with 1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3 not 
found: ID does not exist" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.868469 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3"} err="failed to get container status \"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3\": rpc error: code = NotFound desc = could not find container \"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3\": container with ID starting with 1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3 not found: ID does not exist" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.022861 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.079232 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.143688 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.144333 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" gracePeriod=30 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.144370 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" containerID="cri-o://2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" gracePeriod=30 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.144341 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" gracePeriod=30 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.144696 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" gracePeriod=30 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.149628 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150313 4183 topology_manager.go:215] "Topology Admit Handler" podUID="56d9256d8ee968b89d58cda59af60969" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150575 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150679 4183 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150738 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150753 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150766 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150828 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150845 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150855 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150900 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150915 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150928 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150938 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150965 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150975 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150986 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150998 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151010 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-content" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151022 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-content" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151035 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" 
containerName="registry-server" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151044 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151059 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151069 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151081 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-utilities" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151090 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-utilities" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151384 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151408 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151419 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151430 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151446 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151459 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151472 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151486 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151499 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151512 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151523 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151534 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" 
containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151549 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151685 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151697 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151714 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151723 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151744 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151755 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.154246 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.154457 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.154473 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.220156 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.220710 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.324255 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.324653 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.324758 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.325074 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.377766 4183 generic.go:334] "Generic (PLEG): container finished" podID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerID="a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9" exitCode=0 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.380354 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9"} Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.513021 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.565031 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.567986 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager-cert-syncer/0.log" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.585559 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podUID="56d9256d8ee968b89d58cda59af60969" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.587046 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.587198 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.610520 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.613113 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" containerID="cri-o://81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" gracePeriod=2 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628478 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"2eb2b200bca0d10cf0fe16fb7c0caf80\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628580 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"6db26b71-4e04-4688-a0c0-00e06e8c888d\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628636 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"2eb2b200bca0d10cf0fe16fb7c0caf80\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628668 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"6db26b71-4e04-4688-a0c0-00e06e8c888d\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628712 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"6db26b71-4e04-4688-a0c0-00e06e8c888d\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.630710 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "2eb2b200bca0d10cf0fe16fb7c0caf80" (UID: "2eb2b200bca0d10cf0fe16fb7c0caf80"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.631118 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2eb2b200bca0d10cf0fe16fb7c0caf80" (UID: "2eb2b200bca0d10cf0fe16fb7c0caf80"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.632228 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities" (OuterVolumeSpecName: "utilities") pod "6db26b71-4e04-4688-a0c0-00e06e8c888d" (UID: "6db26b71-4e04-4688-a0c0-00e06e8c888d"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.646752 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s" (OuterVolumeSpecName: "kube-api-access-nzb4s") pod "6db26b71-4e04-4688-a0c0-00e06e8c888d" (UID: "6db26b71-4e04-4688-a0c0-00e06e8c888d"). InnerVolumeSpecName "kube-api-access-nzb4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746159 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746221 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746236 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746252 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.769860 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.770273 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" containerID="cri-o://844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568" gracePeriod=2 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.808083 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podUID="56d9256d8ee968b89d58cda59af60969" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.223896 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" path="/var/lib/kubelet/pods/2eb2b200bca0d10cf0fe16fb7c0caf80/volumes" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.231017 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.237370 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" path="/var/lib/kubelet/pods/9ad279b4-d9dc-42a8-a1c8-a002bd063482/volumes" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.386715 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.386913 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.387039 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.389317 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities" (OuterVolumeSpecName: "utilities") pod "ccdf38cf-634a-41a2-9c8b-74bb86af80a7" (UID: "ccdf38cf-634a-41a2-9c8b-74bb86af80a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.406403 4183 generic.go:334] "Generic (PLEG): container finished" podID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerID="6cc839079ff04a5b6cb4524dc6e36a89fd8caab9bf6a552eeffb557088851619" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.407500 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerDied","Data":"6cc839079ff04a5b6cb4524dc6e36a89fd8caab9bf6a552eeffb557088851619"} Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.414144 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.414560 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs" (OuterVolumeSpecName: "kube-api-access-n59fs") pod "ccdf38cf-634a-41a2-9c8b-74bb86af80a7" (UID: "ccdf38cf-634a-41a2-9c8b-74bb86af80a7"). InnerVolumeSpecName "kube-api-access-n59fs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.415194 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45"} Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.415606 4183 scope.go:117] "RemoveContainer" containerID="a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.447434 4183 generic.go:334] "Generic (PLEG): container finished" podID="bb917686-edfb-4158-86ad-6fce0abec64c" containerID="844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.448262 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568"} Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.482407 4183 generic.go:334] "Generic (PLEG): container finished" podID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.482857 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.482914 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"} Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.483860 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350"} Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.489756 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.490010 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.501195 4183 scope.go:117] "RemoveContainer" containerID="5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.509593 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.538016 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager-cert-syncer/0.log" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548408 4183 logs.go:325] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548477 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548491 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548506 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" exitCode=0 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548557 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" exitCode=2 Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.550728 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.605947 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.611004 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podUID="56d9256d8ee968b89d58cda59af60969" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.651167 4183 scope.go:117] "RemoveContainer" containerID="d14340d88bbcb0bdafcdb676bdd527fc02a2314081fa0355609f2faf4fe6c57a" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.699327 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"bb917686-edfb-4158-86ad-6fce0abec64c\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.699537 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"bb917686-edfb-4158-86ad-6fce0abec64c\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.699654 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"bb917686-edfb-4158-86ad-6fce0abec64c\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.703280 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities" (OuterVolumeSpecName: "utilities") pod "bb917686-edfb-4158-86ad-6fce0abec64c" (UID: "bb917686-edfb-4158-86ad-6fce0abec64c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.713128 4183 scope.go:117] "RemoveContainer" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.715474 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr" (OuterVolumeSpecName: "kube-api-access-mwzcr") pod "bb917686-edfb-4158-86ad-6fce0abec64c" (UID: "bb917686-edfb-4158-86ad-6fce0abec64c"). InnerVolumeSpecName "kube-api-access-mwzcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.766120 4183 scope.go:117] "RemoveContainer" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.809106 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.809204 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.876493 4183 scope.go:117] "RemoveContainer" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.951741 4183 scope.go:117] "RemoveContainer" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" Aug 13 20:06:33 crc kubenswrapper[4183]: E0813 20:06:33.956229 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2\": container with ID starting with 81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2 not found: ID does not exist" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.956396 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"} err="failed to get container status \"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2\": rpc error: code = NotFound desc = could not find container \"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2\": container with ID starting with 81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2 not found: ID does not exist" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.956556 4183 scope.go:117] "RemoveContainer" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24" Aug 13 20:06:33 crc kubenswrapper[4183]: E0813 20:06:33.957238 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24\": container with ID starting with be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24 not found: ID does not exist" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 
20:06:33.957296 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"} err="failed to get container status \"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24\": rpc error: code = NotFound desc = could not find container \"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24\": container with ID starting with be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24 not found: ID does not exist" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.957317 4183 scope.go:117] "RemoveContainer" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101" Aug 13 20:06:33 crc kubenswrapper[4183]: E0813 20:06:33.957667 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101\": container with ID starting with aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101 not found: ID does not exist" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.957698 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101"} err="failed to get container status \"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101\": rpc error: code = NotFound desc = could not find container \"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101\": container with ID starting with aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101 not found: ID does not exist" Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.957715 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.028438 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.113426 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.115441 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6db26b71-4e04-4688-a0c0-00e06e8c888d" (UID: "6db26b71-4e04-4688-a0c0-00e06e8c888d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.124953 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.127435 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb917686-edfb-4158-86ad-6fce0abec64c" (UID: "bb917686-edfb-4158-86ad-6fce0abec64c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.190249 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.226137 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.230289 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.266904 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.267957 4183 topology_manager.go:215] "Topology Admit Handler" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" podNamespace="openshift-marketplace" podName="redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.268649 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269046 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269069 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269076 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269091 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269100 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269114 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269122 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269136 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269143 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269155 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269164 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269178 4183 
cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269186 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269219 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269227 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-content" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269237 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269244 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-utilities" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269398 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269419 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269428 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.271124 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.302167 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.332213 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.448725 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.448842 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.448906 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.481760 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.515334 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.551308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.551391 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.551418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.552235 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " 
pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.553105 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.610158 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.625273 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.626101 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.626376 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.626490 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.631271 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.631345 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.631366 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc 
kubenswrapper[4183]: I0813 20:06:34.631658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.641227 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.641315 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.641344 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.642564 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642589 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642599 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642761 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761"} Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642974 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.645946 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.646259 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.646347 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.650081 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.650302 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.650482 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.652664 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.653002 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.668983 4183 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.669054 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.676139 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.676184 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.689053 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.689169 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.690944 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.691014 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.694191 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 
13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.694252 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.695225 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.695266 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.705911 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.705945 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.706983 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.707016 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.707643 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.707677 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.713412 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status 
\"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.713475 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.716474 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.716517 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.722234 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.722283 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.733247 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.733349 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.739469 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccdf38cf-634a-41a2-9c8b-74bb86af80a7" (UID: "ccdf38cf-634a-41a2-9c8b-74bb86af80a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.741499 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.741566 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.742463 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.742497 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.745275 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.745312 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.746895 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.746915 4183 scope.go:117] "RemoveContainer" containerID="844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.767764 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.767926 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") 
on node \"crc\" DevicePath \"\"" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.817313 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.004094 4183 scope.go:117] "RemoveContainer" containerID="c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.109002 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.135918 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.195435 4183 scope.go:117] "RemoveContainer" containerID="1e5547d2ec134d919f281661be1d8428aa473dba5709d51d784bbe4bf125231a" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.225423 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" path="/var/lib/kubelet/pods/6db26b71-4e04-4688-a0c0-00e06e8c888d/volumes" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.228259 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" path="/var/lib/kubelet/pods/bb917686-edfb-4158-86ad-6fce0abec64c/volumes" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.229735 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" path="/var/lib/kubelet/pods/ccdf38cf-634a-41a2-9c8b-74bb86af80a7/volumes" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.622105 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.666846 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.667018 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.705030 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerDied","Data":"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec"} Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.705097 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.705171 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.714641 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"dc02677d-deed-4cc9-bb8c-0dd300f83655\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.714768 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"dc02677d-deed-4cc9-bb8c-0dd300f83655\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.715053 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"dc02677d-deed-4cc9-bb8c-0dd300f83655\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.716059 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock" (OuterVolumeSpecName: "var-lock") pod "dc02677d-deed-4cc9-bb8c-0dd300f83655" (UID: "dc02677d-deed-4cc9-bb8c-0dd300f83655"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.716115 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dc02677d-deed-4cc9-bb8c-0dd300f83655" (UID: "dc02677d-deed-4cc9-bb8c-0dd300f83655"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.739478 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.739660 4183 topology_manager.go:215] "Topology Admit Handler" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" podNamespace="openshift-marketplace" podName="certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.740078 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dc02677d-deed-4cc9-bb8c-0dd300f83655" (UID: "dc02677d-deed-4cc9-bb8c-0dd300f83655"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:35 crc kubenswrapper[4183]: E0813 20:06:35.752916 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerName="installer" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.752975 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerName="installer" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.753232 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerName="installer" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.754313 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.802645 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.816953 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817278 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817663 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817921 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817940 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817955 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.919704 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.920273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.920436 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.921238 4183 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.921268 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.926700 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.967949 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.090066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.638373 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:06:36 crc kubenswrapper[4183]: W0813 20:06:36.663759 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5391dc5d_0f00_4464_b617_b164e2f9b77a.slice/crio-93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d WatchSource:0}: Error finding container 93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d: Status 404 returned error can't find the container with id 93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.722331 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.722712 4183 topology_manager.go:215] "Topology Admit Handler" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" podNamespace="openshift-marketplace" podName="redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.724295 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.733585 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.733685 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.733727 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.740443 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerID="ba4e7e607991d317206ebde80c8cb2e26997cbbc08e8b4f17e61b221f795d438" exitCode=0 Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.740556 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"ba4e7e607991d317206ebde80c8cb2e26997cbbc08e8b4f17e61b221f795d438"} Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.740590 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerStarted","Data":"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d"} Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.744770 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerStarted","Data":"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d"} Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.834905 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.836955 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.837483 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-h4g78\" (UniqueName: 
\"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.836767 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.837421 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.890610 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.896240 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.151050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.657129 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:06:37 crc kubenswrapper[4183]: W0813 20:06:37.678370 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e1b407b_80a9_40d6_aa0b_a5ffb555c8ed.slice/crio-3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8 WatchSource:0}: Error finding container 3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8: Status 404 returned error can't find the container with id 3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8 Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.752983 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerStarted","Data":"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8"} Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.755721 4183 generic.go:334] "Generic (PLEG): container finished" podID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerID="d0410fb00ff1950c83008d849c88f9052caf868a3476a49f11cc841d25bf1215" exitCode=0 Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.756002 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"d0410fb00ff1950c83008d849c88f9052caf868a3476a49f11cc841d25bf1215"} Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.342086 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p7svp"] Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 
20:06:38.342230 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" podNamespace="openshift-marketplace" podName="community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.343500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.393189 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7svp"] Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.460305 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.460466 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.460712 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.562320 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.562455 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.562501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.563335 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.563627 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.624249 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.675174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7svp" Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.780855 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerStarted","Data":"8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa"} Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.785269 4183 generic.go:334] "Generic (PLEG): container finished" podID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" exitCode=0 Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.785411 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d"} Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.796367 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerStarted","Data":"35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679"} Aug 13 20:06:39 crc kubenswrapper[4183]: I0813 20:06:39.382481 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7svp"] Aug 13 20:06:39 crc kubenswrapper[4183]: I0813 20:06:39.811895 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerStarted","Data":"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7"} Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.666927 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.667427 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.822606 4183 generic.go:334] "Generic (PLEG): container finished" podID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b" exitCode=0 Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 
20:06:40.822832 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"} Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.827595 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerStarted","Data":"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1"} Aug 13 20:06:41 crc kubenswrapper[4183]: I0813 20:06:41.835751 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerStarted","Data":"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"} Aug 13 20:06:45 crc kubenswrapper[4183]: I0813 20:06:45.666543 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:45 crc kubenswrapper[4183]: I0813 20:06:45.667135 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:46 crc kubenswrapper[4183]: I0813 20:06:46.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:46 crc kubenswrapper[4183]: I0813 20:06:46.231273 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="df02f99a-b4f8-4711-aedf-964dcb4d3400" Aug 13 20:06:46 crc kubenswrapper[4183]: I0813 20:06:46.231314 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="df02f99a-b4f8-4711-aedf-964dcb4d3400" Aug 13 20:06:47 crc kubenswrapper[4183]: I0813 20:06:47.015557 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:49 crc kubenswrapper[4183]: I0813 20:06:49.218239 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:06:49 crc kubenswrapper[4183]: I0813 20:06:49.869394 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:49 crc kubenswrapper[4183]: I0813 20:06:49.913567 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.033314 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.668940 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds 
container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.669135 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.717035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.723046 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.910383 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"a386295a4836609efa126cdad0f8da6cec9163b751ff142e15d9693c89cf9866"} Aug 13 20:06:51 crc kubenswrapper[4183]: I0813 20:06:51.343841 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:51 crc kubenswrapper[4183]: I0813 20:06:51.919581 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98"} Aug 13 20:06:52 crc kubenswrapper[4183]: I0813 20:06:52.939619 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b"} Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.719310 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720070 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720141 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720171 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720205 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Pending" Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.847698 4183 scope.go:117] "RemoveContainer" containerID="3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc" Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.985710 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289"} Aug 13 20:06:55 crc kubenswrapper[4183]: I0813 20:06:55.666286 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:55 crc kubenswrapper[4183]: I0813 20:06:55.666865 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:55 crc kubenswrapper[4183]: I0813 20:06:55.997314 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202"} Aug 13 20:06:58 crc kubenswrapper[4183]: I0813 20:06:58.023164 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerID="35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679" exitCode=0 Aug 13 20:06:58 crc kubenswrapper[4183]: I0813 20:06:58.023567 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679"} Aug 13 20:06:59 crc kubenswrapper[4183]: I0813 20:06:59.164298 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=9.164227237 podStartE2EDuration="9.164227237s" podCreationTimestamp="2025-08-13 20:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:06:56.384585892 +0000 UTC m=+1383.077250730" watchObservedRunningTime="2025-08-13 20:06:59.164227237 +0000 UTC m=+1385.856892155" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.040353 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerStarted","Data":"ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160"} Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.666357 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.667547 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.717568 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.718035 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.718195 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.718446 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.723382 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.760496 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.947442 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4txfd" podStartSLOduration=5.377419812 podStartE2EDuration="26.947382872s" podCreationTimestamp="2025-08-13 20:06:34 +0000 UTC" firstStartedPulling="2025-08-13 20:06:36.744736971 +0000 UTC m=+1363.437401649" lastFinishedPulling="2025-08-13 20:06:58.314699941 +0000 UTC m=+1385.007364709" observedRunningTime="2025-08-13 20:07:00.09942957 +0000 UTC m=+1386.792094548" watchObservedRunningTime="2025-08-13 20:07:00.947382872 +0000 UTC m=+1387.640047580" Aug 13 20:07:01 crc kubenswrapper[4183]: I0813 20:07:01.053138 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:02 crc kubenswrapper[4183]: I0813 20:07:02.062380 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.066363 4183 generic.go:334] "Generic (PLEG): container finished" podID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerID="8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa" exitCode=0 Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.066554 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa"} Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.225319 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-11-crc"] Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.225450 4183 topology_manager.go:215] "Topology Admit Handler" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" podNamespace="openshift-kube-apiserver" podName="installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.241292 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.252570 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-4kgh8" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.252718 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.371516 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.371593 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.371635 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473588 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473926 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.474127 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.460102 4183 
kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"] Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.535456 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.632665 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.633258 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.771343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.907291 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.111193 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerStarted","Data":"d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80"} Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.183763 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cfdk8" podStartSLOduration=4.576405217 podStartE2EDuration="30.18370006s" podCreationTimestamp="2025-08-13 20:06:35 +0000 UTC" firstStartedPulling="2025-08-13 20:06:37.758363852 +0000 UTC m=+1364.451028550" lastFinishedPulling="2025-08-13 20:07:03.365658395 +0000 UTC m=+1390.058323393" observedRunningTime="2025-08-13 20:07:05.183269748 +0000 UTC m=+1391.875934756" watchObservedRunningTime="2025-08-13 20:07:05.18370006 +0000 UTC m=+1391.876364888" Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.402368 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.588097 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"] Aug 13 20:07:05 crc kubenswrapper[4183]: W0813 20:07:05.615964 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod47a054e4_19c2_4c12_a054_fc5edc98978a.slice/crio-82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763 WatchSource:0}: Error finding container 82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763: Status 404 returned error can't find the container with id 82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763 Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.667290 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.667378 4183 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.091326 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.091412 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.136054 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerStarted","Data":"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763"} Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.550982 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.151422 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4txfd" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server" containerID="cri-o://ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160" gracePeriod=2 Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.152121 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerStarted","Data":"1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c"} Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.231709 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-11-crc" podStartSLOduration=5.231646296 podStartE2EDuration="5.231646296s" podCreationTimestamp="2025-08-13 20:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:07.229267578 +0000 UTC m=+1393.921932296" watchObservedRunningTime="2025-08-13 20:07:07.231646296 +0000 UTC m=+1393.924311034" Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.286308 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cfdk8" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" probeResult="failure" output=< Aug 13 20:07:07 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:07:07 crc kubenswrapper[4183]: > Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.192452 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerID="ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160" exitCode=0 Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.194124 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160"} Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.713376 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.890060 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.891033 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.891471 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.892132 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities" (OuterVolumeSpecName: "utilities") pod "af6c965e-9dc8-417a-aa1c-303a50ec9adc" (UID: "af6c965e-9dc8-417a-aa1c-303a50ec9adc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.011540 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg" (OuterVolumeSpecName: "kube-api-access-ckbzg") pod "af6c965e-9dc8-417a-aa1c-303a50ec9adc" (UID: "af6c965e-9dc8-417a-aa1c-303a50ec9adc"). InnerVolumeSpecName "kube-api-access-ckbzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.015756 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.015858 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.212389 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.225379 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af6c965e-9dc8-417a-aa1c-303a50ec9adc" (UID: "af6c965e-9dc8-417a-aa1c-303a50ec9adc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.226151 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d"} Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.226223 4183 scope.go:117] "RemoveContainer" containerID="ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.320702 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.376467 4183 scope.go:117] "RemoveContainer" containerID="35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.456132 4183 scope.go:117] "RemoveContainer" containerID="ba4e7e607991d317206ebde80c8cb2e26997cbbc08e8b4f17e61b221f795d438" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.543745 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.571687 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:07:10 crc kubenswrapper[4183]: I0813 20:07:10.667045 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:07:10 crc kubenswrapper[4183]: I0813 20:07:10.667532 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:07:11 crc kubenswrapper[4183]: I0813 20:07:11.218191 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" path="/var/lib/kubelet/pods/af6c965e-9dc8-417a-aa1c-303a50ec9adc/volumes" Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.284216 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.285762 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a" exitCode=0 Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.285861 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a"} Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.285930 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 
20:07:15.666054 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.666198 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.185655 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.295187 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58"} Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.295262 4183 scope.go:117] "RemoveContainer" containerID="b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.295293 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.302642 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.331041 4183 scope.go:117] "RemoveContainer" containerID="b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370123 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370703 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370929 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370972 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371014 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371046 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371094 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371133 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371182 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371243 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371284 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371667 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371702 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371918 4183 reconciler_common.go:300] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371945 4183 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.372972 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.380871 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.384032 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj" (OuterVolumeSpecName: "kube-api-access-6j2kj") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "kube-api-access-6j2kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.395651 4183 scope.go:117] "RemoveContainer" containerID="ee7ad10446d56157471e17a6fd0a6c5ffb7cc6177a566dcf214a0b78b5502ef3" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.443578 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.473163 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.473231 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.473243 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.514920 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.515325 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config" (OuterVolumeSpecName: "config") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.520955 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.574284 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.574332 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.574348 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.616269 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit" (OuterVolumeSpecName: "audit") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.619083 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.675731 4183 reconciler_common.go:300] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.675868 4183 reconciler_common.go:300] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.688930 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.777555 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:17 crc kubenswrapper[4183]: I0813 20:07:17.332901 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"] Aug 13 20:07:17 crc kubenswrapper[4183]: I0813 20:07:17.349174 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"] Aug 13 20:07:17 crc kubenswrapper[4183]: I0813 20:07:17.468404 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:07:18 crc kubenswrapper[4183]: I0813 20:07:18.313383 4183 generic.go:334] "Generic (PLEG): container finished" podID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d" exitCode=0 Aug 13 20:07:18 crc kubenswrapper[4183]: I0813 20:07:18.313692 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cfdk8" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" containerID="cri-o://d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80" gracePeriod=2 Aug 13 20:07:18 crc kubenswrapper[4183]: I0813 20:07:18.313898 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"} Aug 13 20:07:19 crc kubenswrapper[4183]: I0813 20:07:19.219654 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23d6435-6431-4905-b41b-a517327385e5" path="/var/lib/kubelet/pods/b23d6435-6431-4905-b41b-a517327385e5/volumes" Aug 13 20:07:19 crc kubenswrapper[4183]: I0813 20:07:19.322545 4183 generic.go:334] "Generic (PLEG): container finished" podID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerID="d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80" exitCode=0 Aug 13 20:07:19 crc kubenswrapper[4183]: I0813 20:07:19.322644 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80"} Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.070461 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"] Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076068 4183 topology_manager.go:215] "Topology Admit Handler" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" podNamespace="openshift-apiserver" podName="apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076570 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="fix-audit-permissions" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076593 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="fix-audit-permissions" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076607 4183 cpu_manager.go:396] "RemoveStaleState: removing 
container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076615 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076963 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076984 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076996 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077004 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077014 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-content" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077058 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-content" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077069 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-utilities" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077077 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-utilities" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077085 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077093 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077107 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077117 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077129 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077136 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077147 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077156 4183 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077310 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077325 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077335 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077345 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077358 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077382 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077392 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077402 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077411 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077420 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077523 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077532 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077547 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077555 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.078031 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.078358 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.078375 4183 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.079939 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.079958 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.090318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.120717 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.143089 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.143954 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.144162 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.145585 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.152960 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"] Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.163645 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174554 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174703 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174746 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174820 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod 
\"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174860 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174926 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174956 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174984 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.175008 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.175038 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.175065 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.179288 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.179574 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.187850 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Aug 13 20:07:20 crc kubenswrapper[4183]: 
I0813 20:07:20.188868 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.189288 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.265979 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276394 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"5391dc5d-0f00-4464-b617-b164e2f9b77a\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276475 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"5391dc5d-0f00-4464-b617-b164e2f9b77a\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276546 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"5391dc5d-0f00-4464-b617-b164e2f9b77a\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276838 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276864 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276918 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 
20:07:20.276949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277022 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277092 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.278049 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.279247 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.281050 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.281554 4183 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.288187 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities" (OuterVolumeSpecName: "utilities") pod "5391dc5d-0f00-4464-b617-b164e2f9b77a" (UID: "5391dc5d-0f00-4464-b617-b164e2f9b77a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.290228 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.290477 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.294052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.327843 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.329297 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.334041 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w" (OuterVolumeSpecName: "kube-api-access-nqx8w") pod "5391dc5d-0f00-4464-b617-b164e2f9b77a" (UID: "5391dc5d-0f00-4464-b617-b164e2f9b77a"). InnerVolumeSpecName "kube-api-access-nqx8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.339052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.350518 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.373138 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d"} Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.373208 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.373223 4183 scope.go:117] "RemoveContainer" containerID="d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.380660 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.380710 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.390558 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerStarted","Data":"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"} Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.451122 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.503198 4183 scope.go:117] "RemoveContainer" containerID="8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.539637 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p7svp" podStartSLOduration=4.757178704 podStartE2EDuration="42.539582856s" podCreationTimestamp="2025-08-13 20:06:38 +0000 UTC" firstStartedPulling="2025-08-13 20:06:40.825674156 +0000 UTC m=+1367.518338884" lastFinishedPulling="2025-08-13 20:07:18.608078248 +0000 UTC m=+1405.300743036" observedRunningTime="2025-08-13 20:07:20.539262247 +0000 UTC m=+1407.231927065" watchObservedRunningTime="2025-08-13 20:07:20.539582856 +0000 UTC m=+1407.232247584" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.665127 4183 scope.go:117] "RemoveContainer" containerID="d0410fb00ff1950c83008d849c88f9052caf868a3476a49f11cc841d25bf1215" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.767388 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5391dc5d-0f00-4464-b617-b164e2f9b77a" (UID: "5391dc5d-0f00-4464-b617-b164e2f9b77a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.790747 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.105498 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.120492 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.218084 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" path="/var/lib/kubelet/pods/5391dc5d-0f00-4464-b617-b164e2f9b77a/volumes" Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.355501 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"] Aug 13 20:07:21 crc kubenswrapper[4183]: W0813 20:07:21.374354 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41e8708a_e40d_4d28_846b_c52eda4d1755.slice/crio-2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8 WatchSource:0}: Error finding container 2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8: Status 404 returned error can't find the container with id 2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8 Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.402828 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8"} Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.164391 4183 kubelet.go:2429] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/revision-pruner-11-crc"] Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165017 4183 topology_manager.go:215] "Topology Admit Handler" podUID="1784282a-268d-4e44-a766-43281414e2dc" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-11-crc" Aug 13 20:07:22 crc kubenswrapper[4183]: E0813 20:07:22.165221 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165237 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" Aug 13 20:07:22 crc kubenswrapper[4183]: E0813 20:07:22.165257 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-content" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165266 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-content" Aug 13 20:07:22 crc kubenswrapper[4183]: E0813 20:07:22.165282 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-utilities" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165291 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-utilities" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165468 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.166174 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.170125 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.172343 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.201478 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-11-crc"]
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.210239 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.210690 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.312677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.314463 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.315166 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.390261 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.411919 4183 generic.go:334] "Generic (PLEG): container finished" podID="41e8708a-e40d-4d28-846b-c52eda4d1755" containerID="58037de88507ed248b3008018dedcd37e5ffaf512da1efdad96531a3c165ed1d" exitCode=0
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.412028 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerDied","Data":"58037de88507ed248b3008018dedcd37e5ffaf512da1efdad96531a3c165ed1d"}
Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.499614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.031373 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-8-crc"]
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.032141 4183 topology_manager.go:215] "Topology Admit Handler" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" podNamespace="openshift-kube-scheduler" podName="installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.033275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.063699 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-9ln8g"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.064197 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.127986 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-crc"]
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.137526 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.137624 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.137673 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239627 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239817 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239944 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.240035 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.318300 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.354371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.432220 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"ee9b6eb9461a74aad78cf9091cb08ce2922ebd34495ef62c73d64b9e4a16fd71"}
Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.506287 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-11-crc"]
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.097175 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-crc"]
Aug 13 20:07:24 crc kubenswrapper[4183]: W0813 20:07:24.115985 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaca1f9ff_a685_4a78_b461_3931b757f754.slice/crio-d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056 WatchSource:0}: Error finding container d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056: Status 404 returned error can't find the container with id d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.337192 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"]
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.337768 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" podNamespace="openshift-kube-controller-manager" podName="installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.338997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.463611 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.463699 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.463837 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.476437 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"907e380361ba3b0228dd34236f32c08de85ddb289bd11f2a1c6bc95e5042248f"}
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.484451 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"]
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.488919 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerStarted","Data":"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056"}
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.498696 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerStarted","Data":"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448"}
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.564857 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.565013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.565046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.566492 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.567348 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.700714 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.702078 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podStartSLOduration=87.702000825 podStartE2EDuration="1m27.702000825s" podCreationTimestamp="2025-08-13 20:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:24.689446405 +0000 UTC m=+1411.382111213" watchObservedRunningTime="2025-08-13 20:07:24.702000825 +0000 UTC m=+1411.394665613"
Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.963169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc"
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.452551 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.453223 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.522573 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerStarted","Data":"f4f5bb6e58084ee7338acaefbb6a6dac0e4bc0801ff33d60707cf12512275cd2"}
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.527492 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerStarted","Data":"5d491b38e707472af1834693c9fb2878d530381f767e9605a1f4536f559018ef"}
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.561588 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-8-crc" podStartSLOduration=3.561536929 podStartE2EDuration="3.561536929s" podCreationTimestamp="2025-08-13 20:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:25.553178059 +0000 UTC m=+1412.245842817" watchObservedRunningTime="2025-08-13 20:07:25.561536929 +0000 UTC m=+1412.254201967"
Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.625133 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-11-crc" podStartSLOduration=3.62507817 podStartE2EDuration="3.62507817s" podCreationTimestamp="2025-08-13 20:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:25.606249501 +0000 UTC m=+1412.298914199" watchObservedRunningTime="2025-08-13 20:07:25.62507817 +0000 UTC m=+1412.317742888"
Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.189841 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"]
Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.548853 4183 generic.go:334] "Generic (PLEG): container finished" podID="1784282a-268d-4e44-a766-43281414e2dc" containerID="5d491b38e707472af1834693c9fb2878d530381f767e9605a1f4536f559018ef" exitCode=0
Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.549013 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerDied","Data":"5d491b38e707472af1834693c9fb2878d530381f767e9605a1f4536f559018ef"}
Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.552214 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerStarted","Data":"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31"}
Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.561049 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerStarted","Data":"0028ed1d2f2b6b7f754d78a66fe28befb02bf632d29bbafaf101bd5630ca0ce6"}
Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.608386 4183 patch_prober.go:28] interesting pod/apiserver-7fc54b8dd7-d2bhp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Aug 13 20:07:27 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok
Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:07:27 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.608501 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.610608 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-11-crc" podStartSLOduration=3.610560436 podStartE2EDuration="3.610560436s" podCreationTimestamp="2025-08-13 20:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:27.606207552 +0000 UTC m=+1414.298872320" watchObservedRunningTime="2025-08-13 20:07:27.610560436 +0000 UTC m=+1414.303225224"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.081528 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.181422 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"1784282a-268d-4e44-a766-43281414e2dc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") "
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.181506 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"1784282a-268d-4e44-a766-43281414e2dc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") "
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.181844 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1784282a-268d-4e44-a766-43281414e2dc" (UID: "1784282a-268d-4e44-a766-43281414e2dc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.192577 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1784282a-268d-4e44-a766-43281414e2dc" (UID: "1784282a-268d-4e44-a766-43281414e2dc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.282391 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.282458 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.571373 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerDied","Data":"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448"}
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.571444 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.571490 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.675683 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.675947 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.055307 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p7svp" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:07:30 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:07:30 crc kubenswrapper[4183]: >
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.476521 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.489692 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.785087 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.794980 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.796348 4183 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44" exitCode=1
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.796429 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"}
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.796711 4183 scope.go:117] "RemoveContainer" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.798757 4183 scope.go:117] "RemoveContainer" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"
Aug 13 20:07:30 crc kubenswrapper[4183]: E0813 20:07:30.802263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-7d46d5bb6d-rrg6t_openshift-ingress-operator(7d51f445-054a-4e4f-a67b-a828f5a32511)\"" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 20:07:31 crc kubenswrapper[4183]: I0813 20:07:31.494135 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"]
Aug 13 20:07:31 crc kubenswrapper[4183]: I0813 20:07:31.496093 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-11-crc" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer" containerID="cri-o://1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c" gracePeriod=30
Aug 13 20:07:31 crc kubenswrapper[4183]: I0813 20:07:31.806205 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.900684 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.900870 4183 topology_manager.go:215] "Topology Admit Handler" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" podNamespace="openshift-kube-apiserver" podName="installer-12-crc"
Aug 13 20:07:33 crc kubenswrapper[4183]: E0813 20:07:33.901086 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1784282a-268d-4e44-a766-43281414e2dc" containerName="pruner"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.901101 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="1784282a-268d-4e44-a766-43281414e2dc" containerName="pruner"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.901254 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="1784282a-268d-4e44-a766-43281414e2dc" containerName="pruner"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.901686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.941547 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.977020 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.977103 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.977151 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078045 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078226 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078263 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078391 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078512 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.108364 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.241523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.910347 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Aug 13 20:07:34 crc kubenswrapper[4183]: W0813 20:07:34.931394 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3557248c_8f70_4165_aa66_8df983e7e01a.slice/crio-afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309 WatchSource:0}: Error finding container afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309: Status 404 returned error can't find the container with id afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309
Aug 13 20:07:35 crc kubenswrapper[4183]: I0813 20:07:35.846426 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerStarted","Data":"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309"}
Aug 13 20:07:36 crc kubenswrapper[4183]: I0813 20:07:36.856537 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerStarted","Data":"6b580ba621276e10a232c15451ffaeddf32ec7044f6dad05aaf5e3b8fd52877a"}
Aug 13 20:07:37 crc kubenswrapper[4183]: I0813 20:07:37.071385 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=4.071312054 podStartE2EDuration="4.071312054s" podCreationTimestamp="2025-08-13 20:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:37.058583339 +0000 UTC m=+1423.751248147" watchObservedRunningTime="2025-08-13 20:07:37.071312054 +0000 UTC m=+1423.763976852"
Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 20:07:38.884289 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 20:07:38.888306 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-11-crc_47a054e4-19c2-4c12-a054-fc5edc98978a/installer/0.log"
Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 20:07:38.888691 4183 generic.go:334] "Generic (PLEG): container finished" podID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerID="1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c" exitCode=1
Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 20:07:38.888738 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerDied","Data":"1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c"}
Aug 13 20:07:39 crc kubenswrapper[4183]: I0813 20:07:39.005603 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:39 crc kubenswrapper[4183]: I0813 20:07:39.899108 4183 generic.go:334] "Generic (PLEG): container finished" podID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" exitCode=0
Aug 13 20:07:39 crc kubenswrapper[4183]: I0813 20:07:39.899327 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1"}
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.374439 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-11-crc_47a054e4-19c2-4c12-a054-fc5edc98978a/installer/0.log"
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.374553 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480018 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"47a054e4-19c2-4c12-a054-fc5edc98978a\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") "
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480112 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"47a054e4-19c2-4c12-a054-fc5edc98978a\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") "
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480227 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"47a054e4-19c2-4c12-a054-fc5edc98978a\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") "
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480543 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock" (OuterVolumeSpecName: "var-lock") pod "47a054e4-19c2-4c12-a054-fc5edc98978a" (UID: "47a054e4-19c2-4c12-a054-fc5edc98978a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.481650 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "47a054e4-19c2-4c12-a054-fc5edc98978a" (UID: "47a054e4-19c2-4c12-a054-fc5edc98978a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.498477 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "47a054e4-19c2-4c12-a054-fc5edc98978a" (UID: "47a054e4-19c2-4c12-a054-fc5edc98978a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.535472 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7svp"]
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.581704 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.581765 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.581835 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.929182 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-11-crc_47a054e4-19c2-4c12-a054-fc5edc98978a/installer/0.log"
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.929511 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p7svp" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server" containerID="cri-o://346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194" gracePeriod=2
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.929634 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.931381 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerDied","Data":"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763"}
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.931445 4183 scope.go:117] "RemoveContainer" containerID="1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c"
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.023616 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"]
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.038541 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"]
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.226148 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" path="/var/lib/kubelet/pods/47a054e4-19c2-4c12-a054-fc5edc98978a/volumes"
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.536707 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.699273 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"8518239d-8dab-48ac-a3c1-e775566b9bff\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") "
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.699872 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"8518239d-8dab-48ac-a3c1-e775566b9bff\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") "
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.700154 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"8518239d-8dab-48ac-a3c1-e775566b9bff\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") "
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.701044 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities" (OuterVolumeSpecName: "utilities") pod "8518239d-8dab-48ac-a3c1-e775566b9bff" (UID: "8518239d-8dab-48ac-a3c1-e775566b9bff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.706169 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl" (OuterVolumeSpecName: "kube-api-access-vv6hl") pod "8518239d-8dab-48ac-a3c1-e775566b9bff" (UID: "8518239d-8dab-48ac-a3c1-e775566b9bff"). InnerVolumeSpecName "kube-api-access-vv6hl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.802685 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.803220 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.944462 4183 generic.go:334] "Generic (PLEG): container finished" podID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194" exitCode=0
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.944597 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.944665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"}
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.946142 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7"}
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.946204 4183 scope.go:117] "RemoveContainer" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.953649 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerStarted","Data":"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012"}
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.981507 4183 scope.go:117] "RemoveContainer" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.052749 4183 scope.go:117] "RemoveContainer" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.152768 4183 scope.go:117] "RemoveContainer" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"
Aug 13 20:07:42 crc kubenswrapper[4183]: E0813 20:07:42.154453 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194\": container with ID starting with 346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194 not found: ID does not exist" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.154529 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"} err="failed to get container status \"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194\": rpc error: code = NotFound desc = could not find container \"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194\": container with ID starting with 346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194 not found: ID does not exist"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.154541 4183 scope.go:117] "RemoveContainer" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"
Aug 13 20:07:42 crc kubenswrapper[4183]: E0813 20:07:42.155376 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d\": container with ID starting with c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d not found: ID does not exist" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.155404 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"} err="failed to get container status \"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d\": rpc error: code = NotFound desc = could not find container \"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d\": container with ID starting with c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d not found: ID does not exist"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.155414 4183 scope.go:117] "RemoveContainer" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"
Aug 13 20:07:42 crc kubenswrapper[4183]: E0813 20:07:42.162089 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b\": container with ID starting with 75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b not found: ID does not exist" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.162170 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"} err="failed to get container status \"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b\": rpc error: code = NotFound desc = could not find container \"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b\": container with ID starting with 75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b not found: ID does not exist"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.363078 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pmqwc" podStartSLOduration=4.845765531 podStartE2EDuration="1m6.363011681s" podCreationTimestamp="2025-08-13 20:06:36 +0000 UTC" firstStartedPulling="2025-08-13 20:06:38.788419425 +0000 UTC m=+1365.481084033" lastFinishedPulling="2025-08-13 20:07:40.305665565 +0000 UTC m=+1426.998330183" observedRunningTime="2025-08-13 20:07:42.355966279 +0000 UTC m=+1429.048631407" watchObservedRunningTime="2025-08-13 20:07:42.363011681 +0000 UTC m=+1429.055676399"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.473599 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8518239d-8dab-48ac-a3c1-e775566b9bff" (UID: "8518239d-8dab-48ac-a3c1-e775566b9bff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.527765 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.615264 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7svp"]
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.643988 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p7svp"]
Aug 13 20:07:43 crc kubenswrapper[4183]: I0813 20:07:43.217590 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" path="/var/lib/kubelet/pods/8518239d-8dab-48ac-a3c1-e775566b9bff/volumes"
Aug 13 20:07:45 crc kubenswrapper[4183]: I0813 20:07:45.212168 4183 scope.go:117] "RemoveContainer" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"
Aug 13 20:07:45 crc kubenswrapper[4183]: E0813 20:07:45.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-7d46d5bb6d-rrg6t_openshift-ingress-operator(7d51f445-054a-4e4f-a67b-a828f5a32511)\"" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 20:07:47 crc kubenswrapper[4183]: I0813 20:07:47.152606 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:07:47 crc kubenswrapper[4183]: I0813 20:07:47.153146 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:07:48 crc kubenswrapper[4183]: I0813 20:07:48.274609 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pmqwc" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:07:48 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:07:48 crc kubenswrapper[4183]: >
Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.746623 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747374 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747426 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747463 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747494 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.327978 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.333721 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.336866 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler" containerID="cri-o://5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4" gracePeriod=30
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.337094 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller" containerID="cri-o://da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a" gracePeriod=30
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.337181 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer" containerID="cri-o://daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd" gracePeriod=30
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346086 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346238 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6a57a7fb1944b43a6bd11a349520d301" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346406 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="wait-for-host-port"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346436 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="wait-for-host-port"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346453 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346461 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346471 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-utilities"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346479 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-utilities"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346492 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346498 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346511 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346519 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346529 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346535 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346547 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-content"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346554 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-content"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346565 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346574 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346714 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346729 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346740 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346756 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346765 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.447443 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.447855 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.548995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.549096 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.549212 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.549286 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.582463 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.602443 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_92b2a8634cfe8a21cffcc98cc8c87160/kube-scheduler-cert-syncer/0.log"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.604392 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.624543 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="6a57a7fb1944b43a6bd11a349520d301"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.664649 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"]
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751139 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"92b2a8634cfe8a21cffcc98cc8c87160\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") "
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751244 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"92b2a8634cfe8a21cffcc98cc8c87160\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") "
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751279 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "92b2a8634cfe8a21cffcc98cc8c87160" (UID: "92b2a8634cfe8a21cffcc98cc8c87160"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751451 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "92b2a8634cfe8a21cffcc98cc8c87160" (UID: "92b2a8634cfe8a21cffcc98cc8c87160"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751558 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.853326 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.090766 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_92b2a8634cfe8a21cffcc98cc8c87160/kube-scheduler-cert-syncer/0.log"
Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094243 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a" exitCode=0
Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094309 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd" exitCode=2
Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094315 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094332 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4" exitCode=0
Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094538 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3aeac3b3f0abd9616c32591e8c03ee04ad93d9eaa1a57f5f009d1e5534dc9bf"
Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.099010 4183 generic.go:334] "Generic (PLEG): container finished" podID="aca1f9ff-a685-4a78-b461-3931b757f754" containerID="f4f5bb6e58084ee7338acaefbb6a6dac0e4bc0801ff33d60707cf12512275cd2" exitCode=0
Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.099494 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerDied","Data":"f4f5bb6e58084ee7338acaefbb6a6dac0e4bc0801ff33d60707cf12512275cd2"}
Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.100631 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="6a57a7fb1944b43a6bd11a349520d301"
Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.152190 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="6a57a7fb1944b43a6bd11a349520d301"
Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.105101 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pmqwc" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" containerID="cri-o://18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" gracePeriod=2
Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.209677 4183 scope.go:117] "RemoveContainer" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"
Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.221052 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b2a8634cfe8a21cffcc98cc8c87160" path="/var/lib/kubelet/pods/92b2a8634cfe8a21cffcc98cc8c87160/volumes"
Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.553184 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc"
Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.676586 4183 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680046 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"aca1f9ff-a685-4a78-b461-3931b757f754\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680156 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"aca1f9ff-a685-4a78-b461-3931b757f754\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680224 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"aca1f9ff-a685-4a78-b461-3931b757f754\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680443 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aca1f9ff-a685-4a78-b461-3931b757f754" (UID: "aca1f9ff-a685-4a78-b461-3931b757f754"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680477 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock" (OuterVolumeSpecName: "var-lock") pod "aca1f9ff-a685-4a78-b461-3931b757f754" (UID: "aca1f9ff-a685-4a78-b461-3931b757f754"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.689991 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aca1f9ff-a685-4a78-b461-3931b757f754" (UID: "aca1f9ff-a685-4a78-b461-3931b757f754"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.781577 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.781662 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.781847 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782093 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782114 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782133 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782925 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities" (OuterVolumeSpecName: "utilities") pod "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" (UID: "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.789589 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78" (OuterVolumeSpecName: "kube-api-access-h4g78") pod "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" (UID: "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed"). InnerVolumeSpecName "kube-api-access-h4g78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.883253 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.883325 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.114082 4183 generic.go:334] "Generic (PLEG): container finished" podID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" exitCode=0 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.115157 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.115204 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.116555 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.116586 4183 scope.go:117] "RemoveContainer" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.126548 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.126932 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerDied","Data":"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.126988 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.130167 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.130727 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"2be75d1e514468ff600570e8a9d6f13a97a775a4d62bca4f69b639c8be59cf64"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.207987 4183 scope.go:117] "RemoveContainer" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.295514 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.320057 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.320538 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bd6a3a59e513625ca0ae3724df2686bc" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.320963 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="extract-content" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321206 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="extract-content" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321231 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321239 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321300 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321309 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321319 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" containerName="installer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321327 4183 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aca1f9ff-a685-4a78-b461-3931b757f754" containerName="installer" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321342 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="extract-utilities" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321349 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="extract-utilities" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321360 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321367 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321379 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321385 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321395 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321405 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321518 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321530 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321543 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321554 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321564 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" containerName="installer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321575 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326298 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" containerID="cri-o://4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326705 4183 kuberuntime_container.go:770] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326757 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326866 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" containerID="cri-o://6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.395709 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.395815 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497307 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497385 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497494 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497539 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.549594 4183 scope.go:117] 
"RemoveContainer" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.673146 4183 scope.go:117] "RemoveContainer" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.674149 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012\": container with ID starting with 18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012 not found: ID does not exist" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.674212 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012"} err="failed to get container status \"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012\": rpc error: code = NotFound desc = could not find container \"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012\": container with ID starting with 18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012 not found: ID does not exist" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.674225 4183 scope.go:117] "RemoveContainer" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.677462 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1\": container with ID starting with 89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1 not found: ID does not exist" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.677521 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1"} err="failed to get container status \"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1\": rpc error: code = NotFound desc = could not find container \"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1\": container with ID starting with 89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1 not found: ID does not exist" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.677535 4183 scope.go:117] "RemoveContainer" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.678622 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d\": container with ID starting with 29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d not found: ID does not exist" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.678687 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d"} err="failed to get container status 
\"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d\": rpc error: code = NotFound desc = could not find container \"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d\": container with ID starting with 29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d not found: ID does not exist" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.718601 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.718702 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.718973 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" start-of-body= Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.719119 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.737956 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_56d9256d8ee968b89d58cda59af60969/kube-controller-manager-cert-syncer/0.log" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.740496 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.749570 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="56d9256d8ee968b89d58cda59af60969" podUID="bd6a3a59e513625ca0ae3724df2686bc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.801739 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod \"56d9256d8ee968b89d58cda59af60969\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.801960 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"56d9256d8ee968b89d58cda59af60969\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.802251 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "56d9256d8ee968b89d58cda59af60969" (UID: "56d9256d8ee968b89d58cda59af60969"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.802286 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "56d9256d8ee968b89d58cda59af60969" (UID: "56d9256d8ee968b89d58cda59af60969"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.814840 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" (UID: "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.903427 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.903510 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.903528 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.072465 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.084490 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.142231 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_56d9256d8ee968b89d58cda59af60969/kube-controller-manager-cert-syncer/0.log" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144623 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202" exitCode=0 Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144689 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289" exitCode=2 Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144712 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b" exitCode=0 Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144729 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98" exitCode=0 Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144739 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144967 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a386295a4836609efa126cdad0f8da6cec9163b751ff142e15d9693c89cf9866" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.149350 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="56d9256d8ee968b89d58cda59af60969" podUID="bd6a3a59e513625ca0ae3724df2686bc" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.150471 4183 generic.go:334] "Generic (PLEG): container finished" podID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerID="0028ed1d2f2b6b7f754d78a66fe28befb02bf632d29bbafaf101bd5630ca0ce6" exitCode=0 Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.150531 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerDied","Data":"0028ed1d2f2b6b7f754d78a66fe28befb02bf632d29bbafaf101bd5630ca0ce6"} Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.272296 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="56d9256d8ee968b89d58cda59af60969" podUID="bd6a3a59e513625ca0ae3724df2686bc" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.307600 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" path="/var/lib/kubelet/pods/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed/volumes" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.308471 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d9256d8ee968b89d58cda59af60969" path="/var/lib/kubelet/pods/56d9256d8ee968b89d58cda59af60969/volumes" Aug 13 20:08:01 crc kubenswrapper[4183]: E0813 20:08:01.370919 4183 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56d9256d8ee968b89d58cda59af60969.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56d9256d8ee968b89d58cda59af60969.slice/crio-a386295a4836609efa126cdad0f8da6cec9163b751ff142e15d9693c89cf9866\": RecentStats: unable to find data in memory cache]" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.701939 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726456 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") pod \"a45bfab9-f78b-4d72-b5b7-903e60401124\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726566 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"a45bfab9-f78b-4d72-b5b7-903e60401124\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726656 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"a45bfab9-f78b-4d72-b5b7-903e60401124\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726837 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock" (OuterVolumeSpecName: "var-lock") pod "a45bfab9-f78b-4d72-b5b7-903e60401124" (UID: "a45bfab9-f78b-4d72-b5b7-903e60401124"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726907 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a45bfab9-f78b-4d72-b5b7-903e60401124" (UID: "a45bfab9-f78b-4d72-b5b7-903e60401124"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.727044 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.727061 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.737672 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a45bfab9-f78b-4d72-b5b7-903e60401124" (UID: "a45bfab9-f78b-4d72-b5b7-903e60401124"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.828096 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:03 crc kubenswrapper[4183]: I0813 20:08:03.164692 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerDied","Data":"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31"} Aug 13 20:08:03 crc kubenswrapper[4183]: I0813 20:08:03.164755 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Aug 13 20:08:03 crc kubenswrapper[4183]: I0813 20:08:03.164921 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.210374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.233240 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="1f93bc40-081c-4dbc-905a-acda15a1c6ce" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.233318 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="1f93bc40-081c-4dbc-905a-acda15a1c6ce" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.254392 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.259540 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.267557 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.285068 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.294482 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:08:09 crc kubenswrapper[4183]: I0813 20:08:09.207101 4183 generic.go:334] "Generic (PLEG): container finished" podID="6a57a7fb1944b43a6bd11a349520d301" containerID="ecc1c7aa8cb60b63c1dc3d6b8b1d65f58dad0f51d174f6d245650a3c918170f3" exitCode=0 Aug 13 20:08:09 crc kubenswrapper[4183]: I0813 20:08:09.207402 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerDied","Data":"ecc1c7aa8cb60b63c1dc3d6b8b1d65f58dad0f51d174f6d245650a3c918170f3"} Aug 13 20:08:09 crc kubenswrapper[4183]: I0813 20:08:09.207460 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"7d38e4405721e751ffe695369180693433405ae4331549aed5834d79ed44b3ee"} Aug 13 20:08:10 crc kubenswrapper[4183]: I0813 20:08:10.242468 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"f484dd54fa6f1d9458704164d3b0d07e7de45fc1c5c3732080db88204b97a260"} Aug 13 20:08:10 crc kubenswrapper[4183]: I0813 20:08:10.242541 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"321449b7baef718aa4f8e6a5e8027626824e675a08ec111132c5033a8de2bea4"} Aug 13 20:08:11 crc kubenswrapper[4183]: I0813 20:08:11.251534 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"748707f199ebf717d7b583f31dd21339f68d06a1f3fe2bd66ad8cd355863d0b6"} Aug 13 20:08:11 crc kubenswrapper[4183]: I0813 20:08:11.252067 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.208554 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.230189 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="953c24d8-ecc7-443c-a9ae-a3caf95e5e63" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.230240 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="953c24d8-ecc7-443c-a9ae-a3caf95e5e63" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.257216 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=4.2571685630000005 podStartE2EDuration="4.257168563s" podCreationTimestamp="2025-08-13 20:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:08:11.277452103 +0000 UTC m=+1457.970116921" watchObservedRunningTime="2025-08-13 20:08:12.257168563 +0000 UTC m=+1458.949833291" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.259925 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.268844 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.272823 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.292493 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.302328 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:13 crc kubenswrapper[4183]: I0813 20:08:13.288033 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"0be6c231766bb308c5fd1c35f7d778e9085ef87b609e771c9b8c0562273f73af"} Aug 13 20:08:13 crc kubenswrapper[4183]: I0813 20:08:13.288425 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"2a5d2c4f8091434e96a501a9652a7fc6eabd91a48a80b63a8e598b375d046dcf"} Aug 13 20:08:13 crc kubenswrapper[4183]: I0813 20:08:13.288449 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"134690fa1c76729c58b7776be3ce993405e907d37bcd9895349f1550b9cb7b4e"} Aug 13 20:08:14 crc kubenswrapper[4183]: I0813 20:08:14.298722 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"b3f81ba7d134155fdc498a60346928d213e2da7a3f20f0b50f64409568a246cc"} Aug 13 20:08:14 crc kubenswrapper[4183]: I0813 20:08:14.298848 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"dd5de1da9d2aa603827fd445dd57c562cf58ea00258cc5b64a324701843c502b"} Aug 13 20:08:14 crc kubenswrapper[4183]: I0813 20:08:14.346705 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.34665693 podStartE2EDuration="2.34665693s" podCreationTimestamp="2025-08-13 20:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:08:14.341638536 +0000 UTC m=+1461.034303354" watchObservedRunningTime="2025-08-13 20:08:14.34665693 +0000 UTC m=+1461.039321658" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.293526 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.294368 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.298199 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.298330 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.299395 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.301153 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.369525 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.361444 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.769578 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.769759 4183 topology_manager.go:215] "Topology Admit Handler" podUID="7f47300841026200cf071984642de38e" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.770065 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerName="installer" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770092 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerName="installer" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770233 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerName="installer" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770659 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770874 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771150 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver" containerID="cri-o://cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12" gracePeriod=15 Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771208 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343" gracePeriod=15 Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771215 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9" gracePeriod=15 Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771239 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83" gracePeriod=15 Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771375 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9" gracePeriod=15 Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772366 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772453 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ae85115fdc231b4002b57317b41a6400" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772611 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772625 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints" Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772647 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772655 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver" Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772665 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="setup" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772674 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="setup" Aug 
Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772684 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772692 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer"
Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772704 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772712 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz"
Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772721 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772728 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772885 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772925 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772939 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772952 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772961 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852631 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852745 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852875 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852946 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852979 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.853006 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.853028 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.853139 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.878338 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954727 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954844 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954931 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954988 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955017 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955063 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955089 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955161 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955272 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955281 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955310 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955310 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
\"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955338 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955346 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955367 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.174115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:24 crc kubenswrapper[4183]: E0813 20:08:24.241628 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.185b6c6f19d3379d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7f47300841026200cf071984642de38e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,LastTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.372432 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7f47300841026200cf071984642de38e","Type":"ContainerStarted","Data":"887b3913b57be6cd6694b563992e615df63b28b24f279e51986fb9dfc689f5d5"} Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.390453 4183 generic.go:334] "Generic (PLEG): container finished" podID="3557248c-8f70-4165-aa66-8df983e7e01a" containerID="6b580ba621276e10a232c15451ffaeddf32ec7044f6dad05aaf5e3b8fd52877a" exitCode=0 Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.390594 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" 
event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerDied","Data":"6b580ba621276e10a232c15451ffaeddf32ec7044f6dad05aaf5e3b8fd52877a"} Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.395765 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.397652 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.399281 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.414309 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_48128e8d38b5cbcd2691da698bd9cac3/kube-apiserver-cert-syncer/0.log" Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416055 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9" exitCode=0 Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416100 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83" exitCode=0 Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416115 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9" exitCode=0 Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416127 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343" exitCode=2 Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.214399 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.216001 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.217007 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.440382 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.442184 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.436735 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7f47300841026200cf071984642de38e","Type":"ContainerStarted","Data":"92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89"} Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.886490 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.888411 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.889866 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.995965 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"3557248c-8f70-4165-aa66-8df983e7e01a\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996063 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"3557248c-8f70-4165-aa66-8df983e7e01a\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996135 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"3557248c-8f70-4165-aa66-8df983e7e01a\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996285 4183 operation_generator.go:887] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock" (OuterVolumeSpecName: "var-lock") pod "3557248c-8f70-4165-aa66-8df983e7e01a" (UID: "3557248c-8f70-4165-aa66-8df983e7e01a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996363 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3557248c-8f70-4165-aa66-8df983e7e01a" (UID: "3557248c-8f70-4165-aa66-8df983e7e01a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.005385 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3557248c-8f70-4165-aa66-8df983e7e01a" (UID: "3557248c-8f70-4165-aa66-8df983e7e01a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.097962 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.098312 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.098332 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.174745 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.178136 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.181246 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.182057 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.183114 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: 
E0813 20:08:26.183129 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.445472 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.445476 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerDied","Data":"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309"} Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.445574 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.449279 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.451519 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.478514 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.479931 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.858069 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_48128e8d38b5cbcd2691da698bd9cac3/kube-apiserver-cert-syncer/0.log" Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.859873 4183 util.go:48] "No ready sandbox for pod can be found. 
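[Editor's note] The five consecutive "Error updating node status, will retry" entries above, followed by "Unable to update node status ... exceeds retry count", reflect the kubelet's fixed retry budget for node-status pushes (nodeStatusUpdateRetry = 5 in upstream kubelet). A sketch of that bounded-retry shape; tryUpdateNodeStatus is a stand-in, not the kubelet's real function:

```go
// Sketch of the bounded retry visible above: five attempts, then give up.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // matches the five errors logged above

func tryUpdateNodeStatus() error {
	// Stand-in: the log shows each attempt failing with
	// "connect: connection refused" while the apiserver is down.
	return errors.New("connect: connection refused")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```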
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.862061 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.863006 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.863981 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920653 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod \"48128e8d38b5cbcd2691da698bd9cac3\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") "
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920747 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"48128e8d38b5cbcd2691da698bd9cac3\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") "
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920915 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "48128e8d38b5cbcd2691da698bd9cac3" (UID: "48128e8d38b5cbcd2691da698bd9cac3"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920952 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"48128e8d38b5cbcd2691da698bd9cac3\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") "
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920982 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "48128e8d38b5cbcd2691da698bd9cac3" (UID: "48128e8d38b5cbcd2691da698bd9cac3"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921140 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "48128e8d38b5cbcd2691da698bd9cac3" (UID: "48128e8d38b5cbcd2691da698bd9cac3"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921497 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921532 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921543 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.218998 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48128e8d38b5cbcd2691da698bd9cac3" path="/var/lib/kubelet/pods/48128e8d38b5cbcd2691da698bd9cac3/volumes"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.458319 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_48128e8d38b5cbcd2691da698bd9cac3/kube-apiserver-cert-syncer/0.log"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.459534 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12" exitCode=0
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.459608 4183 scope.go:117] "RemoveContainer" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.459755 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.462241 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.464065 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.466914 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.468362 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.470527 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.471441 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.513125 4183 scope.go:117] "RemoveContainer" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.624083 4183 scope.go:117] "RemoveContainer" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.690658 4183 scope.go:117] "RemoveContainer" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.727822 4183 scope.go:117] "RemoveContainer" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.785051 4183 scope.go:117] "RemoveContainer" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.863453 4183 scope.go:117] "RemoveContainer" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.864654 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9\": container with ID starting with 6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9 not found: ID does not exist" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.864760 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"} err="failed to get container status \"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9\": rpc error: code = NotFound desc = could not find container \"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9\": container with ID starting with 6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.864855 4183 scope.go:117] "RemoveContainer" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.865988 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83\": container with ID starting with 8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83 not found: ID does not exist" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866096 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"} err="failed to get container status \"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83\": rpc error: code = NotFound desc = could not find container \"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83\": container with ID starting with 8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866111 4183 scope.go:117] "RemoveContainer" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.866831 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9\": container with ID starting with 955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9 not found: ID does not exist" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866880 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"} err="failed to get container status \"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9\": rpc error: code = NotFound desc = could not find container \"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9\": container with ID starting with 955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866925 4183 scope.go:117] "RemoveContainer" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.868091 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343\": container with ID starting with bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343 not found: ID does not exist" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.868222 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"} err="failed to get container status \"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343\": rpc error: code = NotFound desc = could not find container \"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343\": container with ID starting with bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.868252 4183 scope.go:117] "RemoveContainer" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.869097 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12\": container with ID starting with cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12 not found: ID does not exist" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.869152 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"} err="failed to get container status \"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12\": rpc error: code = NotFound desc = could not find container \"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12\": container with ID starting with cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.869166 4183 scope.go:117] "RemoveContainer" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.870079 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba\": container with ID starting with c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba not found: ID does not exist" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.870130 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"} err="failed to get container status \"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba\": rpc error: code = NotFound desc = could not find container \"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba\": container with ID starting with c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba not found: ID does not exist"
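[Editor's note] Each "RemoveContainer" followed by a "could not find container ... NotFound" pair above is benign: the containers were already removed with the old pod, so the status lookup for their IDs fails. The usual pattern is to treat gRPC NotFound as "already gone". A sketch, assuming an already-connected CRI client:

```go
// Sketch: treat a gRPC NotFound from ContainerStatus as "already removed",
// the benign case behind the DeleteContainer errors logged above.
package criutil

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// RemoveIfPresent deletes a container only if the runtime still knows it.
func RemoveIfPresent(ctx context.Context, client runtimeapi.RuntimeServiceClient, id string) error {
	_, err := client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		return nil // container is already gone; nothing to do
	}
	if err != nil {
		return err
	}
	_, err = client.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id})
	return err
}
```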
\"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba\": container with ID starting with c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba not found: ID does not exist" Aug 13 20:08:28 crc kubenswrapper[4183]: E0813 20:08:28.434605 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.185b6c6f19d3379d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7f47300841026200cf071984642de38e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,LastTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.410013 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.412321 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.413478 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.414387 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.415398 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:32 crc kubenswrapper[4183]: I0813 20:08:32.422569 4183 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.424377 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="200ms" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.626301 4183 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="400ms" Aug 13 20:08:33 crc kubenswrapper[4183]: E0813 20:08:33.028474 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="800ms" Aug 13 20:08:33 crc kubenswrapper[4183]: E0813 20:08:33.830041 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="1.6s" Aug 13 20:08:35 crc kubenswrapper[4183]: I0813 20:08:35.213617 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:35 crc kubenswrapper[4183]: I0813 20:08:35.215381 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:35 crc kubenswrapper[4183]: E0813 20:08:35.431177 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="3.2s" Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.521459 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.523202 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.524232 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.525871 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.526512 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:36 crc 
Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.526527 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.211765 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.212614 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.231367 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.231761 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:37 crc kubenswrapper[4183]: E0813 20:08:37.233020 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.233654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.538540 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"302d89cfbab2c80a69d727fd8c30e727ff36453533105813906fa746343277a0"}
Aug 13 20:08:38 crc kubenswrapper[4183]: E0813 20:08:38.437606 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.185b6c6f19d3379d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7f47300841026200cf071984642de38e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,LastTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546455 4183 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0" exitCode=0
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546519 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerDied","Data":"05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0"}
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546956 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546972 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:38 crc kubenswrapper[4183]: E0813 20:08:38.548383 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.551440 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.553221 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.554631 4183 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:38 crc kubenswrapper[4183]: E0813 20:08:38.633940 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="6.4s"
Aug 13 20:08:39 crc kubenswrapper[4183]: I0813 20:08:39.559148 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282"}
Aug 13 20:08:39 crc kubenswrapper[4183]: I0813 20:08:39.559214 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807"}
Aug 13 20:08:40 crc kubenswrapper[4183]: I0813 20:08:40.599184 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3"}
Aug 13 20:08:40 crc kubenswrapper[4183]: I0813 20:08:40.599535 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078"}
Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.611076 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333"}
Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.611749 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.611849 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75"
Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.612213 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.234267 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.234736 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.342162 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.342428 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.273716 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.471929 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.525141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53c20181-da08-4c94-91d7-6f71a843fa75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:38Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:40Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580
f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:08:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T20:08:37Z\\\"}}}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"53c20181-da08-4c94-91d7-6f71a843fa75\": field is immutable" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.593733 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="ae85115fdc231b4002b57317b41a6400" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.653927 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.653970 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.665200 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.671109 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="ae85115fdc231b4002b57317b41a6400" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Aug 13 20:08:48 crc kubenswrapper[4183]: I0813 20:08:48.660687 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:48 crc kubenswrapper[4183]: I0813 20:08:48.660738 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.748075 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.748960 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Running" Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.748992 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.749206 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.749313 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.749414 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:08:55 crc kubenswrapper[4183]: I0813 20:08:55.227202 4183 
status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="ae85115fdc231b4002b57317b41a6400" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Aug 13 20:08:57 crc kubenswrapper[4183]: I0813 20:08:57.627330 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Aug 13 20:08:57 crc kubenswrapper[4183]: I0813 20:08:57.631933 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Aug 13 20:08:57 crc kubenswrapper[4183]: I0813 20:08:57.982066 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.147301 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.293535 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.296700 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.461026 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.601848 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.117265 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.177676 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.254728 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.262980 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.335459 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.630933 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.789658 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.845263 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.903631 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.057338 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Aug 13 
20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.074697 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.110668 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.303377 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.360247 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.464834 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.489071 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.607957 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.720412 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.780720 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.784394 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.795747 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.862674 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.940179 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.956659 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.085377 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.178096 4183 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.328063 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.447104 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.476288 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Aug 13 20:09:01 crc 
kubenswrapper[4183]: I0813 20:09:01.547427 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.641589 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.665206 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.676310 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.681567 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.692079 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.769757 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.785259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.957170 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.977180 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.081278 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.096022 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.099320 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.378915 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.386933 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.493464 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.498007 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.511713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.686008 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.695292 4183 
reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.961043 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.031525 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.102611 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.110397 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.141717 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.320726 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.446960 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.478887 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.509574 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.607414 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.648203 4183 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.774962 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.947576 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.993438 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.998076 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.033861 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.037003 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.042158 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.068241 4183 
reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.081452 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.101661 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.189515 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.265058 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.324465 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.326161 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.543695 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.547105 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.572449 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.598540 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.654289 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.672610 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.717240 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.822302 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.968089 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.057616 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.199184 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.244267 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Aug 13 20:09:05 crc 
kubenswrapper[4183]: I0813 20:09:05.296634 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.313920 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.472644 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.481972 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.506429 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.556529 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.669561 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.695473 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.866327 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.914427 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.977991 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.000600 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.010262 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.018669 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.055596 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.095466 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.112337 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.114240 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.126649 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 
20:09:06.308156 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.309407 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.369216 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.518110 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.585833 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.595313 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.778450 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.831825 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.850352 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.962435 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.157179 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.180116 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.221351 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.250856 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.257683 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.279858 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.280641 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.301944 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.371653 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.376765 
4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.558063 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.609699 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.620979 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.644389 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.671435 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.696221 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.869656 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.871617 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.884152 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.902953 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.098194 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.125093 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.177401 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.363241 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.532440 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.672480 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.699313 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.700878 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.705558 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.782818 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.783315 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.858137 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.868186 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.999092 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.148008 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.199442 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.265032 4183 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.405863 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.430381 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.460881 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.505573 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.664845 4183 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.780304 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.924032 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.937226 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.072708 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.134052 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.164281 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.227498 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.276419 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.288036 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.370724 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.456064 4183 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.457612 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.458203 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=47.458141811 podStartE2EDuration="47.458141811s" podCreationTimestamp="2025-08-13 20:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:08:47.588553361 +0000 UTC m=+1494.281218409" watchObservedRunningTime="2025-08-13 20:09:10.458141811 +0000 UTC m=+1517.150806510" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.462790 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.462937 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.481349 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.495878 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.498050 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.506394 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.516937 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.516769112 podStartE2EDuration="23.516769112s" podCreationTimestamp="2025-08-13 20:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:09:10.508199597 +0000 UTC m=+1517.200864395" watchObservedRunningTime="2025-08-13 20:09:10.516769112 +0000 UTC m=+1517.209433890" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.610135 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.712759 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.743313 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.840994 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.942279 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.032092 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.093276 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.243481 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.289761 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.342288 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.384979 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.572094 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.624107 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.101727 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.141251 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.263078 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.362504 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.444336 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.801094 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.813525 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 
20:09:13.016540 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.393057 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.499447 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.526685 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.600389 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.632243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.857723 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.992095 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Aug 13 20:09:21 crc kubenswrapper[4183]: I0813 20:09:21.399619 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:09:21 crc kubenswrapper[4183]: I0813 20:09:21.401000 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" containerID="cri-o://92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89" gracePeriod=5 Aug 13 20:09:26 crc kubenswrapper[4183]: I0813 20:09:26.975279 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7f47300841026200cf071984642de38e/startup-monitor/0.log" Aug 13 20:09:26 crc kubenswrapper[4183]: I0813 20:09:26.975935 4183 generic.go:334] "Generic (PLEG): container finished" podID="7f47300841026200cf071984642de38e" containerID="92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89" exitCode=137 Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.058440 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7f47300841026200cf071984642de38e/startup-monitor/0.log" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.058580 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170217 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170309 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170448 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170487 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170552 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170629 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log" (OuterVolumeSpecName: "var-log") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170679 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170706 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock" (OuterVolumeSpecName: "var-lock") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170749 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests" (OuterVolumeSpecName: "manifests") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170949 4183 reconciler_common.go:300] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170975 4183 reconciler_common.go:300] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170991 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.171005 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.181996 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.218138 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f47300841026200cf071984642de38e" path="/var/lib/kubelet/pods/7f47300841026200cf071984642de38e/volumes" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.218546 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.272738 4183 reconciler_common.go:300] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.289033 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.289098 4183 kubelet.go:2639] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0724fd71-838e-4f2e-b139-bb1fd482d17e" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.293089 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.293166 4183 kubelet.go:2663] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0724fd71-838e-4f2e-b139-bb1fd482d17e" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.984729 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7f47300841026200cf071984642de38e/startup-monitor/0.log" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.984982 4183 scope.go:117] "RemoveContainer" containerID="92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89" Aug 13 20:09:27 crc kubenswrapper[4183]: 
I0813 20:09:27.985206 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:09:34 crc kubenswrapper[4183]: I0813 20:09:34.861454 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Aug 13 20:09:42 crc kubenswrapper[4183]: I0813 20:09:42.336888 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.750946 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751742 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751858 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751927 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751981 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:09:55 crc kubenswrapper[4183]: I0813 20:09:55.597745 4183 scope.go:117] "RemoveContainer" containerID="dc3b34e8b871f3bd864f0c456c6ee0a0f7a97f171f4c0c5d20a5a451b26196e9" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.277768 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"] Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.278765 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" podNamespace="openshift-multus" podName="cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: E0813 20:10:15.279955 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" containerName="installer" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.279984 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" containerName="installer" Aug 13 20:10:15 crc kubenswrapper[4183]: E0813 20:10:15.280009 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.280021 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.280316 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" containerName="installer" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.280345 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.283142 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.289029 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.289532 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-smth4" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.378578 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.379062 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.379570 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.380575 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.481719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.481975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.482381 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.482417 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod 
\"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.482748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.483053 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.483370 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.525627 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.609972 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.323726 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerStarted","Data":"e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646"} Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.323769 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerStarted","Data":"7f3fc61d9433e4a7d56e81573eb626edd2106764ab8b801202688d1a24986dc2"} Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.324092 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.363837 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podStartSLOduration=1.363730948 podStartE2EDuration="1.363730948s" podCreationTimestamp="2025-08-13 20:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:10:16.360329401 +0000 UTC m=+1583.052994299" watchObservedRunningTime="2025-08-13 20:10:16.363730948 +0000 UTC m=+1583.056395666" Aug 13 20:10:17 crc kubenswrapper[4183]: I0813 20:10:17.407369 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:18 crc kubenswrapper[4183]: I0813 20:10:18.241296 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"] Aug 13 20:10:19 crc kubenswrapper[4183]: I0813 20:10:19.343356 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" gracePeriod=30 Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.615052 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.619515 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.621844 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.621965 4183 prober.go:104] "Probe errored" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.614950 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.617609 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.621472 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.621559 4183 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.618009 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.623908 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.626362 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.626486 4183 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 
20:10:49.550765 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jx5m8_b78e72e3-8ece-4d66-aa9c-25445bacdc99/kube-multus-additional-cni-plugins/0.log" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.550945 4183 generic.go:334] "Generic (PLEG): container finished" podID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" exitCode=137 Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.551009 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerDied","Data":"e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646"} Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.551044 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerDied","Data":"7f3fc61d9433e4a7d56e81573eb626edd2106764ab8b801202688d1a24986dc2"} Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.551075 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f3fc61d9433e4a7d56e81573eb626edd2106764ab8b801202688d1a24986dc2" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.584207 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jx5m8_b78e72e3-8ece-4d66-aa9c-25445bacdc99/kube-multus-additional-cni-plugins/0.log" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.584448 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.706635 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.706906 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.707146 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.707314 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708152 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready" (OuterVolumeSpecName: "ready") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708195 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.707465 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708648 4183 reconciler_common.go:300] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708672 4183 reconciler_common.go:300] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") on node \"crc\" DevicePath \"\"" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708683 4183 reconciler_common.go:300] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.719169 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9" (OuterVolumeSpecName: "kube-api-access-25pz9") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "kube-api-access-25pz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.810314 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") on node \"crc\" DevicePath \"\"" Aug 13 20:10:50 crc kubenswrapper[4183]: I0813 20:10:50.560008 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:50 crc kubenswrapper[4183]: I0813 20:10:50.605358 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"] Aug 13 20:10:50 crc kubenswrapper[4183]: I0813 20:10:50.611870 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"] Aug 13 20:10:51 crc kubenswrapper[4183]: I0813 20:10:51.217828 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" path="/var/lib/kubelet/pods/b78e72e3-8ece-4d66-aa9c-25445bacdc99/volumes" Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.752861 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753521 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753599 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753657 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753739 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:10:55 crc kubenswrapper[4183]: I0813 20:10:55.709489 4183 scope.go:117] "RemoveContainer" containerID="da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a" Aug 13 20:10:55 crc kubenswrapper[4183]: I0813 20:10:55.758106 4183 scope.go:117] "RemoveContainer" containerID="5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4" Aug 13 20:10:55 crc kubenswrapper[4183]: I0813 20:10:55.792646 4183 scope.go:117] "RemoveContainer" containerID="daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd" Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.755707 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"] Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.756438 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager" containerID="cri-o://764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75" gracePeriod=30 Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.790837 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"] Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.791152 4183 kuberuntime_container.go:770] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" containerID="cri-o://3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8" gracePeriod=30 Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.353873 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.468116 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.469581 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.469685 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.469734 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.470165 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.470498 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.473699 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.476019 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.478873 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config" (OuterVolumeSpecName: "config") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.487118 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.490218 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98" (OuterVolumeSpecName: "kube-api-access-spb98") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "kube-api-access-spb98". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572528 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572630 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572681 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572732 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573142 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573163 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573175 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" 
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573186 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573198 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.574269 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca" (OuterVolumeSpecName: "client-ca") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.574419 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config" (OuterVolumeSpecName: "config") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.578612 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr" (OuterVolumeSpecName: "kube-api-access-nvfwr") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "kube-api-access-nvfwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.579214 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631669 4183 generic.go:334] "Generic (PLEG): container finished" podID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8" exitCode=0
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631834 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerDied","Data":"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631841 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631874 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerDied","Data":"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.632014 4183 scope.go:117] "RemoveContainer" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639087 4183 generic.go:334] "Generic (PLEG): container finished" podID="becc7e17-2bc7-417d-832f-55127299d70f" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75" exitCode=0
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639175 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerDied","Data":"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639256 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerDied","Data":"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639536 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674046 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674428 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674522 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674622 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.718560 4183 scope.go:117] "RemoveContainer" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"
Aug 13 20:11:00 crc kubenswrapper[4183]: E0813 20:11:00.719728 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8\": container with ID starting with 3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8 not found: ID does not exist" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.720139 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"} err="failed to get container status \"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8\": rpc error: code = NotFound desc = could not find container \"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8\": container with ID starting with 3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8 not found: ID does not exist"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.720430 4183 scope.go:117] "RemoveContainer" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.775971 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"]
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.787427 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"]
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.790274 4183 scope.go:117] "RemoveContainer" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"
Aug 13 20:11:00 crc kubenswrapper[4183]: E0813 20:11:00.793167 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75\": container with ID starting with 764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75 not found: ID does not exist" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.793238 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"} err="failed to get container status \"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75\": rpc error: code = NotFound desc = could not find container \"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75\": container with ID starting with 764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75 not found: ID does not exist"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.822961 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"]
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.846342 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.219888 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" path="/var/lib/kubelet/pods/8b8d1c48-5762-450f-bd4d-9134869f432b/volumes"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.220771 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="becc7e17-2bc7-417d-832f-55127299d70f" path="/var/lib/kubelet/pods/becc7e17-2bc7-417d-832f-55127299d70f/volumes"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.529530 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.529740 4183 topology_manager.go:215] "Topology Admit Handler" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" podNamespace="openshift-controller-manager" podName="controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: E0813 20:11:01.530159 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530179 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: E0813 20:11:01.530191 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530199 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins"
Aug 13 20:11:01 crc kubenswrapper[4183]: E0813 20:11:01.530215 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530222 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530383 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530400 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530411 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.535306 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.535403 4183 topology_manager.go:215] "Topology Admit Handler" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.535706 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.536177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.545713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546083 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546286 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546479 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546608 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546723 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.548592 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.550836 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.553742 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.554245 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.554485 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.555215 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.572420 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.600311 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688129 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688249 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688301 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688335 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688738 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688877 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.689031 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.689097 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.689156 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.790450 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.792008 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.790906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793212 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793305 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793338 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793351 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793433 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793556 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.795037 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.795161 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.795292 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.806724 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.817740 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.832039 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.834455 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.860524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.888227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.196323 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"]
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.292702 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"]
Aug 13 20:11:02 crc kubenswrapper[4183]: W0813 20:11:02.303249 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21d29937_debd_4407_b2b1_d1053cb0f342.slice/crio-c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88 WatchSource:0}: Error finding container c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88: Status 404 returned error can't find the container with id c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.667677 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba"}
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.668407 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.670753 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88"}
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.670864 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe"}
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.670889 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c"}
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.671078 4183 patch_prober.go:28] interesting pod/route-controller-manager-776b8b7477-sfpvs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body=
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.671181 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused"
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.671541 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.673582 4183 patch_prober.go:28] interesting pod/controller-manager-778975cc4f-x5vcf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body=
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.673645 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused"
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.701285 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podStartSLOduration=3.701183908 podStartE2EDuration="3.701183908s" podCreationTimestamp="2025-08-13 20:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:11:02.699009676 +0000 UTC m=+1629.391674674" watchObservedRunningTime="2025-08-13 20:11:02.701183908 +0000 UTC m=+1629.393848866"
Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.740758 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podStartSLOduration=3.740696931 podStartE2EDuration="3.740696931s" podCreationTimestamp="2025-08-13 20:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:11:02.739829186 +0000 UTC m=+1629.432494084" watchObservedRunningTime="2025-08-13 20:11:02.740696931 +0000 UTC m=+1629.433361929"
Aug 13 20:11:03 crc kubenswrapper[4183]: I0813 20:11:03.682819 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:03 crc kubenswrapper[4183]: I0813 20:11:03.689194 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.755271 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.755913 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.756028 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.756079 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.756124 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.757243 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758015 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758059 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758090 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758135 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:12:55 crc kubenswrapper[4183]: I0813 20:12:55.917583 4183 scope.go:117] "RemoveContainer" containerID="be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289"
Aug 13 20:12:55 crc kubenswrapper[4183]: I0813 20:12:55.959001 4183 scope.go:117] "RemoveContainer" containerID="6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b"
Aug 13 20:12:56 crc kubenswrapper[4183]: I0813 20:12:56.001663 4183 scope.go:117] "RemoveContainer" containerID="4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98"
Aug 13 20:12:56 crc kubenswrapper[4183]: I0813 20:12:56.041888 4183 scope.go:117] "RemoveContainer" containerID="844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202"
Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.759301 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760034 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760078 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760115 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760150 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.760866 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.761674 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.761741 4183 kubelet_getters.go:187] "Pod status updated"
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.761815 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.761868 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.374435 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.374945 4183 topology_manager.go:215] "Topology Admit Handler" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.375673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.378592 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.379408 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.416621 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.471537 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.472052 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.472270 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.573741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.574275 4183 reconciler_common.go:231] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.574554 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.576120 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.585446 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.598138 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.699457 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:01 crc kubenswrapper[4183]: I0813 20:15:01.025171 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Aug 13 20:15:01 crc kubenswrapper[4183]: I0813 20:15:01.315680 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerStarted","Data":"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855"} Aug 13 20:15:02 crc kubenswrapper[4183]: I0813 20:15:02.324076 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerStarted","Data":"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373"} Aug 13 20:15:02 crc kubenswrapper[4183]: I0813 20:15:02.375455 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" podStartSLOduration=2.375358886 podStartE2EDuration="2.375358886s" podCreationTimestamp="2025-08-13 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:15:02.373158483 +0000 UTC m=+1869.065823261" watchObservedRunningTime="2025-08-13 20:15:02.375358886 +0000 UTC m=+1869.068023744" Aug 13 20:15:03 crc kubenswrapper[4183]: I0813 20:15:03.334093 4183 generic.go:334] "Generic (PLEG): container finished" podID="51936587-a4af-470d-ad92-8ab9062cbc72" containerID="13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373" exitCode=0 Aug 13 20:15:03 crc kubenswrapper[4183]: I0813 20:15:03.334182 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerDied","Data":"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373"} Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.645413 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.728715 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"51936587-a4af-470d-ad92-8ab9062cbc72\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.728881 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"51936587-a4af-470d-ad92-8ab9062cbc72\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.728956 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"51936587-a4af-470d-ad92-8ab9062cbc72\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.730207 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume" (OuterVolumeSpecName: "config-volume") pod "51936587-a4af-470d-ad92-8ab9062cbc72" (UID: "51936587-a4af-470d-ad92-8ab9062cbc72"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.741647 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "51936587-a4af-470d-ad92-8ab9062cbc72" (UID: "51936587-a4af-470d-ad92-8ab9062cbc72"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.756593 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7" (OuterVolumeSpecName: "kube-api-access-wf6f7") pod "51936587-a4af-470d-ad92-8ab9062cbc72" (UID: "51936587-a4af-470d-ad92-8ab9062cbc72"). InnerVolumeSpecName "kube-api-access-wf6f7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.830174 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") on node \"crc\" DevicePath \"\"" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.830264 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.830278 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:15:05 crc kubenswrapper[4183]: I0813 20:15:05.347352 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerDied","Data":"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855"} Aug 13 20:15:05 crc kubenswrapper[4183]: I0813 20:15:05.347776 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Aug 13 20:15:05 crc kubenswrapper[4183]: I0813 20:15:05.347539 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.762499 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763520 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763609 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763646 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763691 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.765066 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766207 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766249 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766277 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766315 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:16:56 crc kubenswrapper[4183]: I0813 
20:16:56.146559 4183 scope.go:117] "RemoveContainer" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.193441 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"] Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.194055 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" podNamespace="openshift-marketplace" podName="certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: E0813 20:16:58.194328 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" containerName="collect-profiles" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.194342 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" containerName="collect-profiles" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.194512 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" containerName="collect-profiles" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.195638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.259855 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"] Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.389343 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.389447 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.389506 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.490922 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.491109 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 
13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.491155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.492075 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.492098 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.518036 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.521542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.870097 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"] Aug 13 20:16:58 crc kubenswrapper[4183]: W0813 20:16:58.874840 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e241cc6_c71d_4fa0_9a1a_18098bcf6594.slice/crio-18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f WatchSource:0}: Error finding container 18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f: Status 404 returned error can't find the container with id 18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f Aug 13 20:16:59 crc kubenswrapper[4183]: I0813 20:16:59.093491 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerStarted","Data":"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f"} Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.103133 4183 generic.go:334] "Generic (PLEG): container finished" podID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerID="a859c58e4fdfbde98f0fc6b6dd5b6b351283c9a369a0cf1ca5981e6dffd1d537" exitCode=0 Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.103218 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"a859c58e4fdfbde98f0fc6b6dd5b6b351283c9a369a0cf1ca5981e6dffd1d537"} Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.113335 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Aug 13 20:17:00 crc 
kubenswrapper[4183]: I0813 20:17:00.181024 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.181189 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" podNamespace="openshift-marketplace" podName="redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.185407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.265288 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.319177 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.319326 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.319369 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.421284 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.421378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.421424 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.422439 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 
13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.422862 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.462297 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.507167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:01 crc kubenswrapper[4183]: I0813 20:17:01.049659 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:01 crc kubenswrapper[4183]: W0813 20:17:01.065223 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda084eaff_10e9_439e_96f3_f3450fb14db7.slice/crio-95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439 WatchSource:0}: Error finding container 95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439: Status 404 returned error can't find the container with id 95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439 Aug 13 20:17:01 crc kubenswrapper[4183]: I0813 20:17:01.134559 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerStarted","Data":"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439"} Aug 13 20:17:02 crc kubenswrapper[4183]: I0813 20:17:02.145903 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerStarted","Data":"81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95"} Aug 13 20:17:02 crc kubenswrapper[4183]: I0813 20:17:02.151179 4183 generic.go:334] "Generic (PLEG): container finished" podID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerID="53f81688e5fd104f842edd52471938f4845344eecb7146cd6a01389e1136528a" exitCode=0 Aug 13 20:17:02 crc kubenswrapper[4183]: I0813 20:17:02.151240 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"53f81688e5fd104f842edd52471938f4845344eecb7146cd6a01389e1136528a"} Aug 13 20:17:03 crc kubenswrapper[4183]: I0813 20:17:03.161241 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerStarted","Data":"c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be"} Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.048838 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.049503 4183 topology_manager.go:215] "Topology Admit Handler" 
podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" podNamespace="openshift-marketplace" podName="redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.050910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.077652 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.078043 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.078266 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.179865 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.179991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.180911 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.181460 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.181579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.247450 4183 generic.go:334] 
"Generic (PLEG): container finished" podID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerID="81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95" exitCode=0 Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.247534 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95"} Aug 13 20:17:18 crc kubenswrapper[4183]: I0813 20:17:18.501218 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:17:19 crc kubenswrapper[4183]: I0813 20:17:19.268059 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerStarted","Data":"f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57"} Aug 13 20:17:20 crc kubenswrapper[4183]: I0813 20:17:20.726525 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:20 crc kubenswrapper[4183]: I0813 20:17:20.882632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:21 crc kubenswrapper[4183]: I0813 20:17:21.156903 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8bbjz" podStartSLOduration=6.689642693 podStartE2EDuration="23.156848646s" podCreationTimestamp="2025-08-13 20:16:58 +0000 UTC" firstStartedPulling="2025-08-13 20:17:00.105515813 +0000 UTC m=+1986.798180411" lastFinishedPulling="2025-08-13 20:17:16.572721666 +0000 UTC m=+2003.265386364" observedRunningTime="2025-08-13 20:17:21.14682776 +0000 UTC m=+2007.839492668" watchObservedRunningTime="2025-08-13 20:17:21.156848646 +0000 UTC m=+2007.849513524" Aug 13 20:17:21 crc kubenswrapper[4183]: I0813 20:17:21.601317 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:17:22 crc kubenswrapper[4183]: I0813 20:17:22.294948 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerStarted","Data":"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5"} Aug 13 20:17:22 crc kubenswrapper[4183]: I0813 20:17:22.298131 4183 generic.go:334] "Generic (PLEG): container finished" podID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerID="c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be" exitCode=0 Aug 13 20:17:22 crc kubenswrapper[4183]: I0813 20:17:22.298174 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be"} Aug 13 20:17:24 crc kubenswrapper[4183]: I0813 20:17:24.318734 4183 generic.go:334] "Generic (PLEG): container finished" podID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerID="194af42a5001c99ae861a7524d09f26e2ac4df40b0aef4c0a94425791cba5661" exitCode=0 Aug 13 20:17:24 crc 
kubenswrapper[4183]: I0813 20:17:24.319078 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"194af42a5001c99ae861a7524d09f26e2ac4df40b0aef4c0a94425791cba5661"} Aug 13 20:17:24 crc kubenswrapper[4183]: I0813 20:17:24.328164 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerStarted","Data":"e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee"} Aug 13 20:17:25 crc kubenswrapper[4183]: I0813 20:17:25.786058 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nsk78" podStartSLOduration=5.345693387 podStartE2EDuration="25.786006691s" podCreationTimestamp="2025-08-13 20:17:00 +0000 UTC" firstStartedPulling="2025-08-13 20:17:02.153570299 +0000 UTC m=+1988.846235017" lastFinishedPulling="2025-08-13 20:17:22.593883603 +0000 UTC m=+2009.286548321" observedRunningTime="2025-08-13 20:17:25.781553214 +0000 UTC m=+2012.474217902" watchObservedRunningTime="2025-08-13 20:17:25.786006691 +0000 UTC m=+2012.478671639" Aug 13 20:17:26 crc kubenswrapper[4183]: I0813 20:17:26.348657 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerStarted","Data":"064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783"} Aug 13 20:17:28 crc kubenswrapper[4183]: I0813 20:17:28.522411 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:17:28 crc kubenswrapper[4183]: I0813 20:17:28.522533 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:17:29 crc kubenswrapper[4183]: I0813 20:17:29.752257 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8bbjz" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" probeResult="failure" output=< Aug 13 20:17:29 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:17:29 crc kubenswrapper[4183]: > Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.356548 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.357267 4183 topology_manager.go:215] "Topology Admit Handler" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" podNamespace="openshift-marketplace" podName="community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.359125 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.397519 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.397720 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.397941 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.465031 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.500349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.500478 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.500571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.501318 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.501491 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.508324 4183 kubelet.go:2533] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.508371 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.580356 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.687703 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.690202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:31 crc kubenswrapper[4183]: I0813 20:17:31.157708 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:17:31 crc kubenswrapper[4183]: I0813 20:17:31.386560 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerStarted","Data":"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790"} Aug 13 20:17:31 crc kubenswrapper[4183]: I0813 20:17:31.552454 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:32 crc kubenswrapper[4183]: I0813 20:17:32.398376 4183 generic.go:334] "Generic (PLEG): container finished" podID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerID="54a087bcecc2c6f5ffbb6af57b3c4e81ed60cca12c4ac0edb8fcbaed62dfc395" exitCode=0 Aug 13 20:17:32 crc kubenswrapper[4183]: I0813 20:17:32.400080 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"54a087bcecc2c6f5ffbb6af57b3c4e81ed60cca12c4ac0edb8fcbaed62dfc395"} Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.148460 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.149759 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nsk78" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" containerID="cri-o://e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee" gracePeriod=2 Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.430402 4183 generic.go:334] "Generic (PLEG): container finished" podID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerID="e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee" exitCode=0 Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.430608 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee"} Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.436848 4183 kubelet.go:2461] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerStarted","Data":"fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906"} Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.735554 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.779065 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"a084eaff-10e9-439e-96f3-f3450fb14db7\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.779167 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"a084eaff-10e9-439e-96f3-f3450fb14db7\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.779255 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"a084eaff-10e9-439e-96f3-f3450fb14db7\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.780384 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities" (OuterVolumeSpecName: "utilities") pod "a084eaff-10e9-439e-96f3-f3450fb14db7" (UID: "a084eaff-10e9-439e-96f3-f3450fb14db7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.790133 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg" (OuterVolumeSpecName: "kube-api-access-sjvpg") pod "a084eaff-10e9-439e-96f3-f3450fb14db7" (UID: "a084eaff-10e9-439e-96f3-f3450fb14db7"). InnerVolumeSpecName "kube-api-access-sjvpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.880210 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") on node \"crc\" DevicePath \"\"" Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.880249 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.912682 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a084eaff-10e9-439e-96f3-f3450fb14db7" (UID: "a084eaff-10e9-439e-96f3-f3450fb14db7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.981512 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.451597 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439"} Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.451670 4183 scope.go:117] "RemoveContainer" containerID="e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee" Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.451886 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.507084 4183 scope.go:117] "RemoveContainer" containerID="c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be" Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.558206 4183 scope.go:117] "RemoveContainer" containerID="53f81688e5fd104f842edd52471938f4845344eecb7146cd6a01389e1136528a" Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.856002 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.946699 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:37 crc kubenswrapper[4183]: I0813 20:17:37.233945 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" path="/var/lib/kubelet/pods/a084eaff-10e9-439e-96f3-f3450fb14db7/volumes" Aug 13 20:17:38 crc kubenswrapper[4183]: I0813 20:17:38.703123 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:17:38 crc kubenswrapper[4183]: I0813 20:17:38.841230 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:17:39 crc kubenswrapper[4183]: I0813 20:17:39.170438 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"] Aug 13 20:17:40 crc kubenswrapper[4183]: I0813 20:17:40.478207 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8bbjz" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" containerID="cri-o://f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57" gracePeriod=2 Aug 13 20:17:42 crc kubenswrapper[4183]: I0813 20:17:42.497339 4183 generic.go:334] "Generic (PLEG): container finished" podID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerID="f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57" exitCode=0 Aug 13 20:17:42 crc kubenswrapper[4183]: I0813 20:17:42.497393 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57"} Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.186627 4183 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.285473 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.286067 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.286932 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities" (OuterVolumeSpecName: "utilities") pod "8e241cc6-c71d-4fa0-9a1a-18098bcf6594" (UID: "8e241cc6-c71d-4fa0-9a1a-18098bcf6594"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.287345 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.289686 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.294325 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw" (OuterVolumeSpecName: "kube-api-access-c56vw") pod "8e241cc6-c71d-4fa0-9a1a-18098bcf6594" (UID: "8e241cc6-c71d-4fa0-9a1a-18098bcf6594"). InnerVolumeSpecName "kube-api-access-c56vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.392494 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") on node \"crc\" DevicePath \"\"" Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.511412 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f"} Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.511496 4183 scope.go:117] "RemoveContainer" containerID="f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57" Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.511652 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.556128 4183 scope.go:117] "RemoveContainer" containerID="81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95" Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.582229 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e241cc6-c71d-4fa0-9a1a-18098bcf6594" (UID: "8e241cc6-c71d-4fa0-9a1a-18098bcf6594"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.602192 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.645938 4183 scope.go:117] "RemoveContainer" containerID="a859c58e4fdfbde98f0fc6b6dd5b6b351283c9a369a0cf1ca5981e6dffd1d537" Aug 13 20:17:45 crc kubenswrapper[4183]: I0813 20:17:45.247674 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"] Aug 13 20:17:45 crc kubenswrapper[4183]: I0813 20:17:45.309950 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"] Aug 13 20:17:47 crc kubenswrapper[4183]: I0813 20:17:47.219237 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" path="/var/lib/kubelet/pods/8e241cc6-c71d-4fa0-9a1a-18098bcf6594/volumes" Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.767616 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768291 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768440 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768565 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768832 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:18:21 crc kubenswrapper[4183]: I0813 20:18:21.790031 4183 generic.go:334] "Generic (PLEG): container finished" podID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerID="fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906" exitCode=0 Aug 13 20:18:21 crc kubenswrapper[4183]: I0813 20:18:21.790379 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906"} Aug 13 20:18:24 crc kubenswrapper[4183]: I0813 20:18:24.830046 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" 
event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerStarted","Data":"9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610"} Aug 13 20:18:28 crc kubenswrapper[4183]: I0813 20:18:28.667179 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tfv59" podStartSLOduration=8.94479276 podStartE2EDuration="58.667068725s" podCreationTimestamp="2025-08-13 20:17:30 +0000 UTC" firstStartedPulling="2025-08-13 20:17:32.401991306 +0000 UTC m=+2019.094655904" lastFinishedPulling="2025-08-13 20:18:22.124267171 +0000 UTC m=+2068.816931869" observedRunningTime="2025-08-13 20:18:28.658892431 +0000 UTC m=+2075.351557529" watchObservedRunningTime="2025-08-13 20:18:28.667068725 +0000 UTC m=+2075.359733513" Aug 13 20:18:30 crc kubenswrapper[4183]: I0813 20:18:30.691065 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:18:30 crc kubenswrapper[4183]: I0813 20:18:30.692101 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:18:31 crc kubenswrapper[4183]: I0813 20:18:31.812856 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tfv59" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" probeResult="failure" output=< Aug 13 20:18:31 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:18:31 crc kubenswrapper[4183]: > Aug 13 20:18:42 crc kubenswrapper[4183]: I0813 20:18:42.212915 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tfv59" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" probeResult="failure" output=< Aug 13 20:18:42 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:18:42 crc kubenswrapper[4183]: > Aug 13 20:18:50 crc kubenswrapper[4183]: I0813 20:18:50.817136 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:18:50 crc kubenswrapper[4183]: I0813 20:18:50.931347 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:18:51 crc kubenswrapper[4183]: I0813 20:18:51.204545 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:18:52 crc kubenswrapper[4183]: I0813 20:18:52.054359 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tfv59" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" containerID="cri-o://9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610" gracePeriod=2 Aug 13 20:18:53 crc kubenswrapper[4183]: I0813 20:18:53.066555 4183 generic.go:334] "Generic (PLEG): container finished" podID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerID="9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610" exitCode=0 Aug 13 20:18:53 crc kubenswrapper[4183]: I0813 20:18:53.066676 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610"} Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 
20:18:54.355104 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.503611 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"718f06fe-dcad-4053-8de2-e2c38fb7503d\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.503694 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"718f06fe-dcad-4053-8de2-e2c38fb7503d\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.503871 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"718f06fe-dcad-4053-8de2-e2c38fb7503d\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.505841 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities" (OuterVolumeSpecName: "utilities") pod "718f06fe-dcad-4053-8de2-e2c38fb7503d" (UID: "718f06fe-dcad-4053-8de2-e2c38fb7503d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.511381 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh" (OuterVolumeSpecName: "kube-api-access-j46mh") pod "718f06fe-dcad-4053-8de2-e2c38fb7503d" (UID: "718f06fe-dcad-4053-8de2-e2c38fb7503d"). InnerVolumeSpecName "kube-api-access-j46mh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.605134 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.605191 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") on node \"crc\" DevicePath \"\"" Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.772825 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773000 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773054 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773115 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773176 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.087090 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790"} Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.087180 4183 scope.go:117] "RemoveContainer" containerID="9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610" Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.087336 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.132051 4183 scope.go:117] "RemoveContainer" containerID="fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906" Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.155373 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "718f06fe-dcad-4053-8de2-e2c38fb7503d" (UID: "718f06fe-dcad-4053-8de2-e2c38fb7503d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.193463 4183 scope.go:117] "RemoveContainer" containerID="54a087bcecc2c6f5ffbb6af57b3c4e81ed60cca12c4ac0edb8fcbaed62dfc395" Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.219316 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:18:56 crc kubenswrapper[4183]: I0813 20:18:56.533634 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:18:56 crc kubenswrapper[4183]: I0813 20:18:56.585294 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:18:57 crc kubenswrapper[4183]: I0813 20:18:57.218185 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" path="/var/lib/kubelet/pods/718f06fe-dcad-4053-8de2-e2c38fb7503d/volumes" Aug 13 20:18:59 crc kubenswrapper[4183]: I0813 20:18:59.120167 4183 generic.go:334] "Generic (PLEG): container finished" podID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerID="064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783" exitCode=0 Aug 13 20:18:59 crc kubenswrapper[4183]: I0813 20:18:59.120258 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783"} Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.131839 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerStarted","Data":"6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca"} Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.224845 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-swl5s" podStartSLOduration=11.079722633 podStartE2EDuration="1m46.221985722s" podCreationTimestamp="2025-08-13 20:17:14 +0000 UTC" firstStartedPulling="2025-08-13 20:17:24.321737916 +0000 UTC m=+2011.014402594" lastFinishedPulling="2025-08-13 20:18:59.464001005 +0000 UTC m=+2106.156665683" observedRunningTime="2025-08-13 20:19:00.220231852 +0000 UTC m=+2106.912896660" watchObservedRunningTime="2025-08-13 20:19:00.221985722 +0000 UTC m=+2106.914651530" Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.883357 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.883456 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:19:01 crc kubenswrapper[4183]: I0813 20:19:01.993382 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" probeResult="failure" output=< Aug 13 20:19:01 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:19:01 crc kubenswrapper[4183]: > Aug 13 20:19:12 crc kubenswrapper[4183]: I0813 20:19:12.039276 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" probeResult="failure" output=< Aug 13 20:19:12 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:19:12 crc kubenswrapper[4183]: > Aug 13 20:19:21 crc kubenswrapper[4183]: I0813 20:19:21.985070 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" probeResult="failure" output=< Aug 13 20:19:21 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:19:21 crc kubenswrapper[4183]: > Aug 13 20:19:31 crc kubenswrapper[4183]: I0813 20:19:31.006405 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:19:31 crc kubenswrapper[4183]: I0813 20:19:31.122567 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.138114 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.138918 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" containerID="cri-o://6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca" gracePeriod=2 Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.397883 4183 generic.go:334] "Generic (PLEG): container finished" podID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerID="6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca" exitCode=0 Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.397948 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca"} Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.611367 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.735233 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"407a8505-ab64-42f9-aa53-a63f8e97c189\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.735402 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"407a8505-ab64-42f9-aa53-a63f8e97c189\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.735463 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"407a8505-ab64-42f9-aa53-a63f8e97c189\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.736719 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities" (OuterVolumeSpecName: "utilities") pod "407a8505-ab64-42f9-aa53-a63f8e97c189" (UID: "407a8505-ab64-42f9-aa53-a63f8e97c189"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.742886 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n" (OuterVolumeSpecName: "kube-api-access-48x8n") pod "407a8505-ab64-42f9-aa53-a63f8e97c189" (UID: "407a8505-ab64-42f9-aa53-a63f8e97c189"). InnerVolumeSpecName "kube-api-access-48x8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.839950 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") on node \"crc\" DevicePath \"\"" Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.840044 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.415040 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5"} Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.415089 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.415176 4183 scope.go:117] "RemoveContainer" containerID="6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca" Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.479710 4183 scope.go:117] "RemoveContainer" containerID="064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783" Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.716961 4183 scope.go:117] "RemoveContainer" containerID="194af42a5001c99ae861a7524d09f26e2ac4df40b0aef4c0a94425791cba5661" Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.736163 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "407a8505-ab64-42f9-aa53-a63f8e97c189" (UID: "407a8505-ab64-42f9-aa53-a63f8e97c189"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.764101 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:19:38 crc kubenswrapper[4183]: I0813 20:19:38.358735 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:19:38 crc kubenswrapper[4183]: I0813 20:19:38.604074 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:19:39 crc kubenswrapper[4183]: I0813 20:19:39.217381 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" path="/var/lib/kubelet/pods/407a8505-ab64-42f9-aa53-a63f8e97c189/volumes" Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.774766 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776105 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776210 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776267 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776328 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.780947 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.781628 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.781725 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.781833 4183 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.783726 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.784718 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.785676 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.785728 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.785858 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.786005 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.786811 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787500 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787549 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787580 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787616 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.788392 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789243 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789302 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789353 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789391 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.790268 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791164 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791235 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" 
status="Running" Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791272 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791350 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.792447 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793238 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793278 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793314 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793340 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.794075 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.794888 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.795014 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.795061 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.795093 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.681077 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.681897 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b152b92f-8fab-4b74-8e68-00278380759d" podNamespace="openshift-marketplace" podName="redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684542 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684698 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684728 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684735 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684752 4183 cpu_manager.go:396] "RemoveStaleState: removing container" 
podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684759 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684841 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684867 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684880 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684887 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684898 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684908 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684918 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684925 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684937 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684944 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684955 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684962 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684975 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684982 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.685027 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685041 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.685052 4183 cpu_manager.go:396] 
"RemoveStaleState: removing container" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685059 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685448 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685487 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685502 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685512 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.686679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.725355 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.734441 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.734624 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.734953 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.838250 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.836613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.838404 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.838438 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.839029 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.843107 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.843256 4183 topology_manager.go:215] "Topology Admit Handler" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" podNamespace="openshift-marketplace" podName="certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.847188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.880146 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.881068 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.941762 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.942067 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.942116 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 
20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.012530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.043376 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.043470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.043535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.044525 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.045458 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.083111 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.172146 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.522088 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.627904 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.815655 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerStarted","Data":"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e"} Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.817284 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerStarted","Data":"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0"} Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.828702 4183 generic.go:334] "Generic (PLEG): container finished" podID="926ac7a4-e156-4e71-9681-7a48897402eb" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc" exitCode=0 Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.828899 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"} Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.833166 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.834515 4183 generic.go:334] "Generic (PLEG): container finished" podID="b152b92f-8fab-4b74-8e68-00278380759d" containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331" exitCode=0 Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.834677 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"} Aug 13 20:27:08 crc kubenswrapper[4183]: I0813 20:27:08.846077 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerStarted","Data":"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"} Aug 13 20:27:08 crc kubenswrapper[4183]: I0813 20:27:08.849557 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerStarted","Data":"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"} Aug 13 20:27:15 crc kubenswrapper[4183]: I0813 20:27:15.932398 4183 generic.go:334] "Generic (PLEG): container finished" podID="b152b92f-8fab-4b74-8e68-00278380759d" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286" exitCode=0 Aug 13 20:27:15 crc kubenswrapper[4183]: I0813 20:27:15.932496 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" 
event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"} Aug 13 20:27:17 crc kubenswrapper[4183]: I0813 20:27:17.952187 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerStarted","Data":"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"} Aug 13 20:27:18 crc kubenswrapper[4183]: I0813 20:27:18.623429 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jbzn9" podStartSLOduration=5.083855925 podStartE2EDuration="13.623333743s" podCreationTimestamp="2025-08-13 20:27:05 +0000 UTC" firstStartedPulling="2025-08-13 20:27:07.836421672 +0000 UTC m=+2594.529086440" lastFinishedPulling="2025-08-13 20:27:16.37589966 +0000 UTC m=+2603.068564258" observedRunningTime="2025-08-13 20:27:18.616155369 +0000 UTC m=+2605.308820377" watchObservedRunningTime="2025-08-13 20:27:18.623333743 +0000 UTC m=+2605.315998621" Aug 13 20:27:18 crc kubenswrapper[4183]: I0813 20:27:18.966283 4183 generic.go:334] "Generic (PLEG): container finished" podID="926ac7a4-e156-4e71-9681-7a48897402eb" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5" exitCode=0 Aug 13 20:27:18 crc kubenswrapper[4183]: I0813 20:27:18.966964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"} Aug 13 20:27:19 crc kubenswrapper[4183]: I0813 20:27:19.985472 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerStarted","Data":"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"} Aug 13 20:27:20 crc kubenswrapper[4183]: I0813 20:27:20.034729 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xldzg" podStartSLOduration=3.500986964 podStartE2EDuration="15.034677739s" podCreationTimestamp="2025-08-13 20:27:05 +0000 UTC" firstStartedPulling="2025-08-13 20:27:07.832168011 +0000 UTC m=+2594.524832719" lastFinishedPulling="2025-08-13 20:27:19.365858876 +0000 UTC m=+2606.058523494" observedRunningTime="2025-08-13 20:27:20.028528893 +0000 UTC m=+2606.721193801" watchObservedRunningTime="2025-08-13 20:27:20.034677739 +0000 UTC m=+2606.727342477" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.013496 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.015469 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.171177 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.173954 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.174409 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.312207 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.173669 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.174635 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.267615 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.431673 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.069858 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jbzn9" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server" containerID="cri-o://7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032" gracePeriod=2 Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.070204 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xldzg" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server" containerID="cri-o://88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418" gracePeriod=2 Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.551734 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.565636 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.706074 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"926ac7a4-e156-4e71-9681-7a48897402eb\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.706587 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"926ac7a4-e156-4e71-9681-7a48897402eb\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.706991 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"b152b92f-8fab-4b74-8e68-00278380759d\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707191 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"926ac7a4-e156-4e71-9681-7a48897402eb\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707319 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"b152b92f-8fab-4b74-8e68-00278380759d\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707465 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"b152b92f-8fab-4b74-8e68-00278380759d\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707537 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities" (OuterVolumeSpecName: "utilities") pod "b152b92f-8fab-4b74-8e68-00278380759d" (UID: "b152b92f-8fab-4b74-8e68-00278380759d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707757 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities" (OuterVolumeSpecName: "utilities") pod "926ac7a4-e156-4e71-9681-7a48897402eb" (UID: "926ac7a4-e156-4e71-9681-7a48897402eb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.708134 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.708253 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.714867 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g" (OuterVolumeSpecName: "kube-api-access-tcz8g") pod "926ac7a4-e156-4e71-9681-7a48897402eb" (UID: "926ac7a4-e156-4e71-9681-7a48897402eb"). InnerVolumeSpecName "kube-api-access-tcz8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.715290 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6" (OuterVolumeSpecName: "kube-api-access-sfrr6") pod "b152b92f-8fab-4b74-8e68-00278380759d" (UID: "b152b92f-8fab-4b74-8e68-00278380759d"). InnerVolumeSpecName "kube-api-access-sfrr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.810096 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") on node \"crc\" DevicePath \"\"" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.810149 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") on node \"crc\" DevicePath \"\"" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.846204 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b152b92f-8fab-4b74-8e68-00278380759d" (UID: "b152b92f-8fab-4b74-8e68-00278380759d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.911927 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.944382 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "926ac7a4-e156-4e71-9681-7a48897402eb" (UID: "926ac7a4-e156-4e71-9681-7a48897402eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.013927 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078332 4183 generic.go:334] "Generic (PLEG): container finished" podID="b152b92f-8fab-4b74-8e68-00278380759d" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032" exitCode=0 Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078431 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"} Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078464 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0"} Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078506 4183 scope.go:117] "RemoveContainer" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078669 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.087593 4183 generic.go:334] "Generic (PLEG): container finished" podID="926ac7a4-e156-4e71-9681-7a48897402eb" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418" exitCode=0 Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.087681 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"} Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.087736 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e"} Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.089151 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.155105 4183 scope.go:117] "RemoveContainer" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.230393 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.247602 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.259374 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.266146 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.278132 4183 scope.go:117] "RemoveContainer" containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.324392 4183 scope.go:117] "RemoveContainer" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032" Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.326065 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032\": container with ID starting with 7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032 not found: ID does not exist" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.326155 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"} err="failed to get container status \"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032\": rpc error: code = NotFound desc = could not find container \"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032\": container with ID starting with 7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032 not found: ID does not exist" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.326184 4183 scope.go:117] "RemoveContainer" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286" Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.327105 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286\": container with ID starting with ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286 not found: ID does not exist" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.327149 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"} err="failed to get container status \"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286\": rpc error: code = NotFound desc = could not find container \"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286\": container with ID starting with 
ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286 not found: ID does not exist" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.327166 4183 scope.go:117] "RemoveContainer" containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331" Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.327955 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331\": container with ID starting with 2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331 not found: ID does not exist" containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.328062 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"} err="failed to get container status \"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331\": rpc error: code = NotFound desc = could not find container \"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331\": container with ID starting with 2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331 not found: ID does not exist" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.328084 4183 scope.go:117] "RemoveContainer" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.363618 4183 scope.go:117] "RemoveContainer" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.424486 4183 scope.go:117] "RemoveContainer" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.478357 4183 scope.go:117] "RemoveContainer" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418" Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.479580 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418\": container with ID starting with 88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418 not found: ID does not exist" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.479858 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"} err="failed to get container status \"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418\": rpc error: code = NotFound desc = could not find container \"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418\": container with ID starting with 88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418 not found: ID does not exist" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.479883 4183 scope.go:117] "RemoveContainer" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5" Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.480605 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
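Every RemoveContainer here races CRI-O's own cleanup: the E-line is ContainerStatus coming back with gRPC NotFound, and the following I-line shows the kubelet logging the error and moving on, since a missing container is exactly the desired end state. Classifying that error boils down to a status-code check; a sketch with the standard gRPC status package (not the kubelet's actual code):

package sketch

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a CRI error means the container no longer
// exists, the benign case the "DeleteContainer returned error" lines above
// tolerate during teardown.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}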
\"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5\": container with ID starting with b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5 not found: ID does not exist" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.480680 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"} err="failed to get container status \"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5\": rpc error: code = NotFound desc = could not find container \"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5\": container with ID starting with b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5 not found: ID does not exist" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.480697 4183 scope.go:117] "RemoveContainer" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc" Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.481149 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc\": container with ID starting with de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc not found: ID does not exist" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc" Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.481210 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"} err="failed to get container status \"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc\": rpc error: code = NotFound desc = could not find container \"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc\": container with ID starting with de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc not found: ID does not exist" Aug 13 20:27:31 crc kubenswrapper[4183]: I0813 20:27:31.218427 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" path="/var/lib/kubelet/pods/926ac7a4-e156-4e71-9681-7a48897402eb/volumes" Aug 13 20:27:31 crc kubenswrapper[4183]: I0813 20:27:31.219874 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b152b92f-8fab-4b74-8e68-00278380759d" path="/var/lib/kubelet/pods/b152b92f-8fab-4b74-8e68-00278380759d/volumes" Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.796855 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797488 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797527 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797558 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797597 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 
20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.324677 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hvwvm"] Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325567 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" podNamespace="openshift-marketplace" podName="community-operators-hvwvm" Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.325926 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-content" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325946 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-content" Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.325959 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325966 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server" Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.325982 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-content" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325989 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-content" Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.326029 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-utilities" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326047 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-utilities" Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.326063 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-utilities" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326072 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-utilities" Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.326125 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326136 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326308 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326322 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.327661 4183 util.go:30] "No sandbox for pod can be found. 
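Admitting community-operators-hvwvm triggers RemoveStaleState in the CPU and memory managers: assignments left behind by the two pods deleted a minute earlier are still in the checkpointed state and are dropped, keyed by (podUID, containerName). A toy illustration of that bookkeeping; the kubelet's real state types are more involved than this:

package sketch

// containerKey mirrors how the cpu_manager/state_mem lines above identify an
// assignment: one entry per (podUID, containerName).
type containerKey struct{ podUID, containerName string }

// removeStaleState drops checkpointed assignments whose pod is no longer
// active, the cleanup the "RemoveStaleState: removing container" lines show.
func removeStaleState(assignments map[containerKey]struct{}, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			delete(assignments, k)
		}
	}
}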
Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.360377 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"] Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.377401 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.377601 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.378243 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.479200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.479349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.479405 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.480311 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.480353 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.516418 4183 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.659547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.064674 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"] Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.629205 4183 generic.go:334] "Generic (PLEG): container finished" podID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef" exitCode=0 Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.630049 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"} Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.630922 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerStarted","Data":"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af"} Aug 13 20:28:45 crc kubenswrapper[4183]: I0813 20:28:45.657598 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerStarted","Data":"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"} Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.798527 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799512 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799589 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799642 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799690 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:28:57 crc kubenswrapper[4183]: I0813 20:28:57.754900 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"} Aug 13 20:28:57 crc kubenswrapper[4183]: I0813 20:28:57.754912 4183 generic.go:334] "Generic (PLEG): container finished" podID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519" exitCode=0 Aug 13 20:28:59 crc kubenswrapper[4183]: I0813 20:28:59.779256 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerStarted","Data":"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"} Aug 13 20:28:59 crc kubenswrapper[4183]: I0813 20:28:59.823743 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hvwvm" podStartSLOduration=3.175837032 podStartE2EDuration="16.823670146s" podCreationTimestamp="2025-08-13 20:28:43 +0000 UTC" firstStartedPulling="2025-08-13 20:28:44.639101497 +0000 UTC m=+2691.331766095" lastFinishedPulling="2025-08-13 20:28:58.286934521 +0000 UTC m=+2704.979599209" observedRunningTime="2025-08-13 20:28:59.820758222 +0000 UTC m=+2706.513422960" watchObservedRunningTime="2025-08-13 20:28:59.823670146 +0000 UTC m=+2706.516334874" Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.660115 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.660963 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.780392 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.914752 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.990443 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"] Aug 13 20:29:05 crc kubenswrapper[4183]: I0813 20:29:05.815902 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hvwvm" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="registry-server" containerID="cri-o://133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc" gracePeriod=2 Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.270104 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.449566 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.450180 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.450371 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.451196 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities" (OuterVolumeSpecName: "utilities") pod "bfb8fd54-a923-43fe-a0f5-bc4066352d71" (UID: "bfb8fd54-a923-43fe-a0f5-bc4066352d71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.457914 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz" (OuterVolumeSpecName: "kube-api-access-j4wdz") pod "bfb8fd54-a923-43fe-a0f5-bc4066352d71" (UID: "bfb8fd54-a923-43fe-a0f5-bc4066352d71"). InnerVolumeSpecName "kube-api-access-j4wdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.551885 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.551946 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") on node \"crc\" DevicePath \"\"" Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.831648 4183 generic.go:334] "Generic (PLEG): container finished" podID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc" exitCode=0 Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.831920 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"} Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.831997 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af"} Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.832103 4183 scope.go:117] "RemoveContainer" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc" Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.832179 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm" Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.886425 4183 scope.go:117] "RemoveContainer" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519" Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.958360 4183 scope.go:117] "RemoveContainer" containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef" Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.001299 4183 scope.go:117] "RemoveContainer" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc" Aug 13 20:29:07 crc kubenswrapper[4183]: E0813 20:29:07.002724 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc\": container with ID starting with 133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc not found: ID does not exist" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc" Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.002860 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"} err="failed to get container status \"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc\": rpc error: code = NotFound desc = could not find container \"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc\": container with ID starting with 133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc not found: ID does not exist" Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.002883 4183 scope.go:117] "RemoveContainer" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519" Aug 13 20:29:07 crc kubenswrapper[4183]: E0813 20:29:07.003455 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519\": container with ID starting with e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519 not found: ID does not exist" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519" Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.003521 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"} err="failed to get container status \"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519\": rpc error: code = NotFound desc = could not find container \"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519\": container with ID starting with e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519 not found: ID does not exist" Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.003548 4183 scope.go:117] "RemoveContainer" containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef" Aug 13 20:29:07 crc kubenswrapper[4183]: E0813 20:29:07.004426 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef\": container with ID starting with e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef not found: ID does not exist" 
containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef" Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.004459 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"} err="failed to get container status \"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef\": rpc error: code = NotFound desc = could not find container \"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef\": container with ID starting with e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef not found: ID does not exist" Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.133046 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfb8fd54-a923-43fe-a0f5-bc4066352d71" (UID: "bfb8fd54-a923-43fe-a0f5-bc4066352d71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.159478 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.474406 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"] Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.488264 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"] Aug 13 20:29:09 crc kubenswrapper[4183]: I0813 20:29:09.217193 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" path="/var/lib/kubelet/pods/bfb8fd54-a923-43fe-a0f5-bc4066352d71/volumes" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.105720 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"] Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106596 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" podNamespace="openshift-marketplace" podName="redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: E0813 20:29:30.106870 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-utilities" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106886 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-utilities" Aug 13 20:29:30 crc kubenswrapper[4183]: E0813 20:29:30.106898 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-content" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106906 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-content" Aug 13 20:29:30 crc kubenswrapper[4183]: E0813 20:29:30.106923 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="registry-server" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106932 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" 
containerName="registry-server" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.107125 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="registry-server" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.115316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.142749 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"] Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.293194 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.293265 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.293294 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.394671 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.395277 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.395684 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.396060 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.396737 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.439308 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.443745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.797719 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"] Aug 13 20:29:31 crc kubenswrapper[4183]: I0813 20:29:31.010510 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerStarted","Data":"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664"} Aug 13 20:29:32 crc kubenswrapper[4183]: I0813 20:29:32.020856 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa" exitCode=0 Aug 13 20:29:32 crc kubenswrapper[4183]: I0813 20:29:32.021000 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"} Aug 13 20:29:33 crc kubenswrapper[4183]: I0813 20:29:33.030834 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerStarted","Data":"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"} Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.801138 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802303 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802388 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802449 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802499 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:30:01 crc kubenswrapper[4183]: I0813 20:30:01.984271 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"] Aug 13 20:30:01 crc kubenswrapper[4183]: I0813 20:30:01.985070 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ad171c4b-8408-4370-8e86-502999788ddb" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251950-x8jjd" Aug 
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.008184 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.008444 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.036942 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"]
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.076386 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.076843 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.077488 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.179277 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.179382 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.179452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.180707 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.190825 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.218103 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.322129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.812554 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"]
Aug 13 20:30:03 crc kubenswrapper[4183]: I0813 20:30:03.273725 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerStarted","Data":"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89"}
Aug 13 20:30:03 crc kubenswrapper[4183]: I0813 20:30:03.273834 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerStarted","Data":"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9"}
Aug 13 20:30:03 crc kubenswrapper[4183]: I0813 20:30:03.327749 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" podStartSLOduration=2.327674238 podStartE2EDuration="2.327674238s" podCreationTimestamp="2025-08-13 20:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:30:03.323089886 +0000 UTC m=+2770.015754874" watchObservedRunningTime="2025-08-13 20:30:03.327674238 +0000 UTC m=+2770.020338866"
Aug 13 20:30:05 crc kubenswrapper[4183]: I0813 20:30:05.290513 4183 generic.go:334] "Generic (PLEG): container finished" podID="ad171c4b-8408-4370-8e86-502999788ddb" containerID="67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89" exitCode=0
Aug 13 20:30:05 crc kubenswrapper[4183]: I0813 20:30:05.290622 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerDied","Data":"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89"}
Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.889910 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.968429 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"ad171c4b-8408-4370-8e86-502999788ddb\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.969155 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"ad171c4b-8408-4370-8e86-502999788ddb\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.969974 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"ad171c4b-8408-4370-8e86-502999788ddb\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.972559 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume" (OuterVolumeSpecName: "config-volume") pod "ad171c4b-8408-4370-8e86-502999788ddb" (UID: "ad171c4b-8408-4370-8e86-502999788ddb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.000000 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ad171c4b-8408-4370-8e86-502999788ddb" (UID: "ad171c4b-8408-4370-8e86-502999788ddb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.001682 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw" (OuterVolumeSpecName: "kube-api-access-pmlcw") pod "ad171c4b-8408-4370-8e86-502999788ddb" (UID: "ad171c4b-8408-4370-8e86-502999788ddb"). InnerVolumeSpecName "kube-api-access-pmlcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.073397 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.073542 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.073637 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") on node \"crc\" DevicePath \"\"" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.307944 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.308046 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerDied","Data":"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9"} Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.309566 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.313402 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29" exitCode=0 Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.314010 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"} Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 20:30:08.188369 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"] Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 20:30:08.202397 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"] Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 20:30:08.323625 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerStarted","Data":"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"} Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 20:30:08.376959 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zdwjn" podStartSLOduration=2.644749574 podStartE2EDuration="38.376906603s" podCreationTimestamp="2025-08-13 20:29:30 +0000 UTC" firstStartedPulling="2025-08-13 20:29:32.023072954 +0000 UTC m=+2738.715737712" lastFinishedPulling="2025-08-13 20:30:07.755230113 +0000 UTC m=+2774.447894741" observedRunningTime="2025-08-13 20:30:08.369449078 +0000 UTC m=+2775.062113856" watchObservedRunningTime="2025-08-13 20:30:08.376906603 +0000 UTC m=+2775.069571331" Aug 13 20:30:09 crc kubenswrapper[4183]: I0813 20:30:09.217942 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" path="/var/lib/kubelet/pods/8500d7bd-50fb-4ca6-af41-b7a24cae43cd/volumes" Aug 13 20:30:10 crc kubenswrapper[4183]: I0813 20:30:10.444935 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:30:10 crc kubenswrapper[4183]: I0813 20:30:10.445312 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:30:11 crc kubenswrapper[4183]: I0813 20:30:11.559391 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zdwjn" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" probeResult="failure" output=< Aug 13 20:30:11 crc kubenswrapper[4183]: timeout: failed to connect service 
":50051" within 1s Aug 13 20:30:11 crc kubenswrapper[4183]: > Aug 13 20:30:21 crc kubenswrapper[4183]: I0813 20:30:21.571657 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zdwjn" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" probeResult="failure" output=< Aug 13 20:30:21 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:30:21 crc kubenswrapper[4183]: > Aug 13 20:30:30 crc kubenswrapper[4183]: I0813 20:30:30.639012 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:30:30 crc kubenswrapper[4183]: I0813 20:30:30.789286 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:30:30 crc kubenswrapper[4183]: I0813 20:30:30.862664 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"] Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.506496 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zdwjn" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" containerID="cri-o://7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e" gracePeriod=2 Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.931506 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.984564 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"6d579e1a-3b27-4c1f-9175-42ac58490d42\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.984743 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"6d579e1a-3b27-4c1f-9175-42ac58490d42\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.984919 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"6d579e1a-3b27-4c1f-9175-42ac58490d42\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.987281 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities" (OuterVolumeSpecName: "utilities") pod "6d579e1a-3b27-4c1f-9175-42ac58490d42" (UID: "6d579e1a-3b27-4c1f-9175-42ac58490d42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.995193 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8" (OuterVolumeSpecName: "kube-api-access-r6rj8") pod "6d579e1a-3b27-4c1f-9175-42ac58490d42" (UID: "6d579e1a-3b27-4c1f-9175-42ac58490d42"). InnerVolumeSpecName "kube-api-access-r6rj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.086897 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.087266 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") on node \"crc\" DevicePath \"\"" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521199 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e" exitCode=0 Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521250 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"} Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521283 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664"} Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521312 4183 scope.go:117] "RemoveContainer" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521409 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.589471 4183 scope.go:117] "RemoveContainer" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.818192 4183 scope.go:117] "RemoveContainer" containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.892207 4183 scope.go:117] "RemoveContainer" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e" Aug 13 20:30:33 crc kubenswrapper[4183]: E0813 20:30:33.897265 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e\": container with ID starting with 7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e not found: ID does not exist" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.897391 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"} err="failed to get container status \"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e\": rpc error: code = NotFound desc = could not find container \"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e\": container with ID starting with 7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e not found: ID does not exist" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.897418 4183 scope.go:117] "RemoveContainer" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29" Aug 13 20:30:33 crc kubenswrapper[4183]: E0813 20:30:33.898541 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29\": container with ID starting with dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29 not found: ID does not exist" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.898707 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"} err="failed to get container status \"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29\": rpc error: code = NotFound desc = could not find container \"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29\": container with ID starting with dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29 not found: ID does not exist" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.898943 4183 scope.go:117] "RemoveContainer" containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa" Aug 13 20:30:33 crc kubenswrapper[4183]: E0813 20:30:33.899705 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa\": container with ID starting with a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa not found: ID does not exist" 
containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.899762 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"} err="failed to get container status \"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa\": rpc error: code = NotFound desc = could not find container \"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa\": container with ID starting with a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa not found: ID does not exist" Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.930635 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d579e1a-3b27-4c1f-9175-42ac58490d42" (UID: "6d579e1a-3b27-4c1f-9175-42ac58490d42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:30:34 crc kubenswrapper[4183]: I0813 20:30:34.008519 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:30:34 crc kubenswrapper[4183]: I0813 20:30:34.175424 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"] Aug 13 20:30:34 crc kubenswrapper[4183]: I0813 20:30:34.188387 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"] Aug 13 20:30:35 crc kubenswrapper[4183]: I0813 20:30:35.217865 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" path="/var/lib/kubelet/pods/6d579e1a-3b27-4c1f-9175-42ac58490d42/volumes" Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.803495 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804074 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804179 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804222 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804256 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:30:56 crc kubenswrapper[4183]: I0813 20:30:56.527744 4183 scope.go:117] "RemoveContainer" containerID="a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c" Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.805259 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806196 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806303 4183 
kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806341 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806378 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.807668 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.808421 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.808465 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.808514 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.808615 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.809699 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810371 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810430 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810472 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810521 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.810974 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.811990 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.812054 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.812164 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.812235 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.813302 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.813971 4183 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.814025 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.814174 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.814227 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.815418 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816161 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816230 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816266 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816304 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.226038 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227116 4183 topology_manager.go:215] "Topology Admit Handler" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" podNamespace="openshift-marketplace" podName="redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227465 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227489 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227519 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-utilities" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227529 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-utilities" Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227576 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-content" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227589 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-content" Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227600 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ad171c4b-8408-4370-8e86-502999788ddb" containerName="collect-profiles" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227610 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad171c4b-8408-4370-8e86-502999788ddb" containerName="collect-profiles" Aug 13 20:37:48 
crc kubenswrapper[4183]: I0813 20:37:48.231919 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.231972 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad171c4b-8408-4370-8e86-502999788ddb" containerName="collect-profiles" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.233395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.272736 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.360000 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.360188 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.360524 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.462502 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.463115 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.463353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.464352 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " 
pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.464448 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.493262 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.563669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.897981 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.610098 4183 generic.go:334] "Generic (PLEG): container finished" podID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d" exitCode=0 Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.610182 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d"} Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.610530 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerStarted","Data":"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973"} Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.614029 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Aug 13 20:37:50 crc kubenswrapper[4183]: I0813 20:37:50.621086 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerStarted","Data":"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"} Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.659569 4183 generic.go:334] "Generic (PLEG): container finished" podID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee" exitCode=0 Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.660074 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"} Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.816871 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.816963 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:37:54 crc 
kubenswrapper[4183]: I0813 20:37:54.817010 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.817053 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.817088 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:37:55 crc kubenswrapper[4183]: I0813 20:37:55.670764 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerStarted","Data":"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"} Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.565755 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.566326 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.676409 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.705440 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nkzlk" podStartSLOduration=5.288354689 podStartE2EDuration="10.705385893s" podCreationTimestamp="2025-08-13 20:37:48 +0000 UTC" firstStartedPulling="2025-08-13 20:37:49.613412649 +0000 UTC m=+3236.306077307" lastFinishedPulling="2025-08-13 20:37:55.030443883 +0000 UTC m=+3241.723108511" observedRunningTime="2025-08-13 20:37:56.514890851 +0000 UTC m=+3243.207556409" watchObservedRunningTime="2025-08-13 20:37:58.705385893 +0000 UTC m=+3245.398050771" Aug 13 20:38:08 crc kubenswrapper[4183]: I0813 20:38:08.683194 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:38:08 crc kubenswrapper[4183]: I0813 20:38:08.749777 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:38:08 crc kubenswrapper[4183]: I0813 20:38:08.764345 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nkzlk" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server" containerID="cri-o://8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" gracePeriod=2 Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.176666 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.217983 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"afc02c17-9714-426d-aafa-ee58c673ab0c\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.218293 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"afc02c17-9714-426d-aafa-ee58c673ab0c\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.218355 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"afc02c17-9714-426d-aafa-ee58c673ab0c\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.219426 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities" (OuterVolumeSpecName: "utilities") pod "afc02c17-9714-426d-aafa-ee58c673ab0c" (UID: "afc02c17-9714-426d-aafa-ee58c673ab0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.226278 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9" (OuterVolumeSpecName: "kube-api-access-9gcn9") pod "afc02c17-9714-426d-aafa-ee58c673ab0c" (UID: "afc02c17-9714-426d-aafa-ee58c673ab0c"). InnerVolumeSpecName "kube-api-access-9gcn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.320361 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.320929 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.366919 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afc02c17-9714-426d-aafa-ee58c673ab0c" (UID: "afc02c17-9714-426d-aafa-ee58c673ab0c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.422616 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.776026 4183 generic.go:334] "Generic (PLEG): container finished" podID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" exitCode=0 Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.776115 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.776194 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"} Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.777248 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973"} Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.777285 4183 scope.go:117] "RemoveContainer" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.829179 4183 scope.go:117] "RemoveContainer" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.866063 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.875230 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.883982 4183 scope.go:117] "RemoveContainer" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.922409 4183 scope.go:117] "RemoveContainer" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" Aug 13 20:38:09 crc kubenswrapper[4183]: E0813 20:38:09.923230 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922\": container with ID starting with 8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922 not found: ID does not exist" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923304 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"} err="failed to get container status \"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922\": rpc error: code = NotFound desc = could not find container \"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922\": container with ID starting with 8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922 not found: ID does not exist" Aug 
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923319 4183 scope.go:117] "RemoveContainer" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"
Aug 13 20:38:09 crc kubenswrapper[4183]: E0813 20:38:09.923941 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee\": container with ID starting with 1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee not found: ID does not exist" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923970 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"} err="failed to get container status \"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee\": rpc error: code = NotFound desc = could not find container \"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee\": container with ID starting with 1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee not found: ID does not exist"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923981 4183 scope.go:117] "RemoveContainer" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d"
Aug 13 20:38:09 crc kubenswrapper[4183]: E0813 20:38:09.925057 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d\": container with ID starting with 380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d not found: ID does not exist" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d"
Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.925250 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d"} err="failed to get container status \"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d\": rpc error: code = NotFound desc = could not find container \"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d\": container with ID starting with 380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d not found: ID does not exist"
Aug 13 20:38:11 crc kubenswrapper[4183]: I0813 20:38:11.217764 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" path="/var/lib/kubelet/pods/afc02c17-9714-426d-aafa-ee58c673ab0c/volumes"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.093544 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"]
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.096217 4183 topology_manager.go:215] "Topology Admit Handler" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" podNamespace="openshift-marketplace" podName="certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: E0813 20:38:36.096659 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.096835 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server"
Aug 13 20:38:36 crc kubenswrapper[4183]: E0813 20:38:36.104025 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-utilities"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.104087 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-utilities"
Aug 13 20:38:36 crc kubenswrapper[4183]: E0813 20:38:36.104122 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-content"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.104129 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-content"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.104443 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.105518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.143532 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"]
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.203570 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.203656 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.204094 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.305098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.305560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.306221 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.307045 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.307051 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.340674 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.431705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.809750 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"]
Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.985997 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerStarted","Data":"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0"}
Aug 13 20:38:37 crc kubenswrapper[4183]: I0813 20:38:37.994454 4183 generic.go:334] "Generic (PLEG): container finished" podID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255" exitCode=0
Aug 13 20:38:37 crc kubenswrapper[4183]: I0813 20:38:37.994525 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255"}
Aug 13 20:38:39 crc kubenswrapper[4183]: I0813 20:38:39.004230 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerStarted","Data":"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"}
Aug 13 20:38:44 crc kubenswrapper[4183]: I0813 20:38:44.041088 4183 generic.go:334] "Generic (PLEG): container finished" podID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2" exitCode=0
Aug 13 20:38:44 crc kubenswrapper[4183]: I0813 20:38:44.041438 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"}
Aug 13 20:38:45 crc kubenswrapper[4183]: I0813 20:38:45.050620 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerStarted","Data":"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"}
Aug 13 20:38:45 crc kubenswrapper[4183]: I0813 20:38:45.084743 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4kmbv" podStartSLOduration=2.689452066 podStartE2EDuration="9.084667311s" podCreationTimestamp="2025-08-13 20:38:36 +0000 UTC" firstStartedPulling="2025-08-13 20:38:37.996633082 +0000 UTC m=+3284.689297820" lastFinishedPulling="2025-08-13 20:38:44.391848357 +0000 UTC m=+3291.084513065" observedRunningTime="2025-08-13 20:38:45.080307175 +0000 UTC m=+3291.772971963" watchObservedRunningTime="2025-08-13 20:38:45.084667311 +0000 UTC m=+3291.777332029"
Aug 13 20:38:46 crc kubenswrapper[4183]: I0813 20:38:46.432635 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:46 crc kubenswrapper[4183]: I0813 20:38:46.433566 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:46 crc kubenswrapper[4183]: I0813 20:38:46.551433 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.817852 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818310 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818423 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818467 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818514 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:38:56 crc kubenswrapper[4183]: I0813 20:38:56.564125 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:56 crc kubenswrapper[4183]: I0813 20:38:56.644422 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"]
Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.141812 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4kmbv" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server" containerID="cri-o://4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" gracePeriod=2
Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.533422 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.617319 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"847e60dc-7a0a-4115-a7e1-356476e319e7\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") "
Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.617553 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"847e60dc-7a0a-4115-a7e1-356476e319e7\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") "
Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.617652 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"847e60dc-7a0a-4115-a7e1-356476e319e7\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") "
Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.618960 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities" (OuterVolumeSpecName: "utilities") pod "847e60dc-7a0a-4115-a7e1-356476e319e7" (UID: "847e60dc-7a0a-4115-a7e1-356476e319e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.628370 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7" (OuterVolumeSpecName: "kube-api-access-bqlp7") pod "847e60dc-7a0a-4115-a7e1-356476e319e7" (UID: "847e60dc-7a0a-4115-a7e1-356476e319e7"). InnerVolumeSpecName "kube-api-access-bqlp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.719139 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.719228 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") on node \"crc\" DevicePath \"\""
Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.842955 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "847e60dc-7a0a-4115-a7e1-356476e319e7" (UID: "847e60dc-7a0a-4115-a7e1-356476e319e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.921914 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151335 4183 generic.go:334] "Generic (PLEG): container finished" podID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" exitCode=0
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151405 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"}
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151452 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0"}
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151497 4183 scope.go:117] "RemoveContainer" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151628 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv"
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.199060 4183 scope.go:117] "RemoveContainer" containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.240373 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"]
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.246222 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"]
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.267919 4183 scope.go:117] "RemoveContainer" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255"
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.320226 4183 scope.go:117] "RemoveContainer" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"
Aug 13 20:38:58 crc kubenswrapper[4183]: E0813 20:38:58.321862 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b\": container with ID starting with 4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b not found: ID does not exist" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.321944 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"} err="failed to get container status \"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b\": rpc error: code = NotFound desc = could not find container \"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b\": container with ID starting with 4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b not found: ID does not exist"
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.321968 4183 scope.go:117] "RemoveContainer" containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"
Aug 13 20:38:58 crc kubenswrapper[4183]: E0813 20:38:58.322957 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2\": container with ID starting with cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2 not found: ID does not exist" containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.323051 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"} err="failed to get container status \"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2\": rpc error: code = NotFound desc = could not find container \"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2\": container with ID starting with cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2 not found: ID does not exist"
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.323071 4183 scope.go:117] "RemoveContainer" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255"
Aug 13 20:38:58 crc kubenswrapper[4183]: E0813 20:38:58.323851 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255\": container with ID starting with f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255 not found: ID does not exist" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255"
Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.323918 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255"} err="failed to get container status \"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255\": rpc error: code = NotFound desc = could not find container \"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255\": container with ID starting with f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255 not found: ID does not exist"
Aug 13 20:38:59 crc kubenswrapper[4183]: I0813 20:38:59.221999 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" path="/var/lib/kubelet/pods/847e60dc-7a0a-4115-a7e1-356476e319e7/volumes"
Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.819395 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820101 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820237 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820279 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820312 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821089 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821872 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821940 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821984 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.822014 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.457733 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"]
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458497 4183 topology_manager.go:215] "Topology Admit Handler" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" podNamespace="openshift-marketplace" podName="redhat-operators-k2tgr"
Aug 13 20:41:21 crc kubenswrapper[4183]: E0813 20:41:21.458870 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458891 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server"
Aug 13 20:41:21 crc kubenswrapper[4183]: E0813 20:41:21.458911 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-content"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458919 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-content"
Aug 13 20:41:21 crc kubenswrapper[4183]: E0813 20:41:21.458935 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-utilities"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458943 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-utilities"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.459099 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.463161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.560744 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"]
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.638564 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.638643 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.638712 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.740072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.740153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.740263 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.741100 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.741155 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.775996 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.813097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:22 crc kubenswrapper[4183]: I0813 20:41:22.212454 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"]
Aug 13 20:41:23 crc kubenswrapper[4183]: I0813 20:41:23.138668 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"}
Aug 13 20:41:23 crc kubenswrapper[4183]: I0813 20:41:23.140092 4183 generic.go:334] "Generic (PLEG): container finished" podID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54" exitCode=0
Aug 13 20:41:23 crc kubenswrapper[4183]: I0813 20:41:23.140278 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerStarted","Data":"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c"}
Aug 13 20:41:24 crc kubenswrapper[4183]: I0813 20:41:24.153949 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerStarted","Data":"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"}
Aug 13 20:41:48 crc kubenswrapper[4183]: I0813 20:41:48.416680 4183 generic.go:334] "Generic (PLEG): container finished" podID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42" exitCode=0
Aug 13 20:41:48 crc kubenswrapper[4183]: I0813 20:41:48.417522 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"}
Aug 13 20:41:50 crc kubenswrapper[4183]: I0813 20:41:50.435617 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerStarted","Data":"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"}
Aug 13 20:41:51 crc kubenswrapper[4183]: I0813 20:41:51.814499 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:51 crc kubenswrapper[4183]: I0813 20:41:51.814605 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:41:52 crc kubenswrapper[4183]: I0813 20:41:52.942710 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k2tgr" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:41:52 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:41:52 crc kubenswrapper[4183]: >
Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.822617 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823133 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823185 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823259 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823299 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:42:02 crc kubenswrapper[4183]: I0813 20:42:02.939416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k2tgr" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:42:02 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:42:02 crc kubenswrapper[4183]: >
Aug 13 20:42:11 crc kubenswrapper[4183]: I0813 20:42:11.984442 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:42:12 crc kubenswrapper[4183]: I0813 20:42:12.028486 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2tgr" podStartSLOduration=25.310193169 podStartE2EDuration="51.028422928s" podCreationTimestamp="2025-08-13 20:41:21 +0000 UTC" firstStartedPulling="2025-08-13 20:41:23.140881353 +0000 UTC m=+3449.833546071" lastFinishedPulling="2025-08-13 20:41:48.859111222 +0000 UTC m=+3475.551775830" observedRunningTime="2025-08-13 20:41:50.480344302 +0000 UTC m=+3477.173009280" watchObservedRunningTime="2025-08-13 20:42:12.028422928 +0000 UTC m=+3498.721087656"
Aug 13 20:42:12 crc kubenswrapper[4183]: I0813 20:42:12.100927 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:42:12 crc kubenswrapper[4183]: I0813 20:42:12.176489 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"]
Aug 13 20:42:13 crc kubenswrapper[4183]: I0813 20:42:13.263240 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 20:42:13 crc kubenswrapper[4183]: I0813 20:42:13.587508 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k2tgr" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" containerID="cri-o://d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0" gracePeriod=2
Aug 13 20:42:13 crc kubenswrapper[4183]: I0813 20:42:13.985208 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.243675 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.329446 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"58e4f786-ee2a-45c4-83a4-523611d1eccd\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") "
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.329529 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"58e4f786-ee2a-45c4-83a4-523611d1eccd\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") "
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.329562 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"58e4f786-ee2a-45c4-83a4-523611d1eccd\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") "
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.330725 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities" (OuterVolumeSpecName: "utilities") pod "58e4f786-ee2a-45c4-83a4-523611d1eccd" (UID: "58e4f786-ee2a-45c4-83a4-523611d1eccd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.346140 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9" (OuterVolumeSpecName: "kube-api-access-shhm9") pod "58e4f786-ee2a-45c4-83a4-523611d1eccd" (UID: "58e4f786-ee2a-45c4-83a4-523611d1eccd"). InnerVolumeSpecName "kube-api-access-shhm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.431373 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.431440 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") on node \"crc\" DevicePath \"\""
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.622657 4183 generic.go:334] "Generic (PLEG): container finished" podID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0" exitCode=0
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.622712 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"}
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.622765 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c"}
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.622852 4183 scope.go:117] "RemoveContainer" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.623034 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.791096 4183 scope.go:117] "RemoveContainer" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.903231 4183 scope.go:117] "RemoveContainer" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.973171 4183 scope.go:117] "RemoveContainer" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"
Aug 13 20:42:14 crc kubenswrapper[4183]: E0813 20:42:14.974453 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0\": container with ID starting with d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0 not found: ID does not exist" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.974568 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"} err="failed to get container status \"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0\": rpc error: code = NotFound desc = could not find container \"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0\": container with ID starting with d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0 not found: ID does not exist"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.974596 4183 scope.go:117] "RemoveContainer" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"
Aug 13 20:42:14 crc kubenswrapper[4183]: E0813 20:42:14.975768 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42\": container with ID starting with 23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42 not found: ID does not exist" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.976375 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"} err="failed to get container status \"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42\": rpc error: code = NotFound desc = could not find container \"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42\": container with ID starting with 23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42 not found: ID does not exist"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.976404 4183 scope.go:117] "RemoveContainer" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"
Aug 13 20:42:14 crc kubenswrapper[4183]: E0813 20:42:14.977560 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54\": container with ID starting with 97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54 not found: ID does not exist" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"
Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.977600 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"} err="failed to get container status \"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54\": rpc error: code = NotFound desc = could not find container \"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54\": container with ID starting with 97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54 not found: ID does not exist"
Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.279549 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58e4f786-ee2a-45c4-83a4-523611d1eccd" (UID: "58e4f786-ee2a-45c4-83a4-523611d1eccd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.345759 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.645911 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"]
Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.671541 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"]
Aug 13 20:42:16 crc kubenswrapper[4183]: I0813 20:42:16.591921 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 20:42:17 crc kubenswrapper[4183]: I0813 20:42:17.218922 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" path="/var/lib/kubelet/pods/58e4f786-ee2a-45c4-83a4-523611d1eccd/volumes"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.022059 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sdddl"]
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.022931 4183 topology_manager.go:215] "Topology Admit Handler" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" podNamespace="openshift-marketplace" podName="community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: E0813 20:42:26.023252 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-content"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023293 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-content"
Aug 13 20:42:26 crc kubenswrapper[4183]: E0813 20:42:26.023313 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-utilities"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023325 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-utilities"
Aug 13 20:42:26 crc kubenswrapper[4183]: E0813 20:42:26.023345 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023355 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023548 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.033492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.042188 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdddl"]
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.209469 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.210951 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.211019 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.312196 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.312307 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.312335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.313570 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.313883 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.356133 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.889621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Aug 13 20:42:27 crc kubenswrapper[4183]: I0813 20:42:27.900601 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdddl"]
Aug 13 20:42:28 crc kubenswrapper[4183]: I0813 20:42:28.727615 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerStarted","Data":"b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a"}
Aug 13 20:42:31 crc kubenswrapper[4183]: I0813 20:42:31.758640 4183 generic.go:334] "Generic (PLEG): container finished" podID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerID="821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f" exitCode=0
Aug 13 20:42:31 crc kubenswrapper[4183]: I0813 20:42:31.758743 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f"}
Aug 13 20:42:34 crc systemd[1]: Stopping Kubernetes Kubelet...
Aug 13 20:42:34 crc kubenswrapper[4183]: I0813 20:42:34.901075 4183 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Aug 13 20:42:34 crc systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 20:42:34 crc systemd[1]: Stopped Kubernetes Kubelet.
Aug 13 20:42:34 crc systemd[1]: kubelet.service: Consumed 9min 48.169s CPU time.
-- Boot da2de9f777ba48f1be421e0e9b862fb3 --
Dec 13 00:01:44 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 13 00:01:44 crc kubenswrapper[3020]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 00:01:44 crc kubenswrapper[3020]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 13 00:01:44 crc kubenswrapper[3020]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 00:01:44 crc kubenswrapper[3020]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 00:01:44 crc kubenswrapper[3020]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 00:01:44 crc kubenswrapper[3020]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.516998 3020 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520101 3020 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520136 3020 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520147 3020 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520158 3020 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520169 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520182 3020 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520206 3020 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520217 3020 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520227 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520237 3020 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520248 3020 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520260 3020 feature_gate.go:227] unrecognized feature gate: NewOLM
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520271 3020 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520282 3020 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520292 3020 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520302 3020 feature_gate.go:227] unrecognized feature gate: Example
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520312 3020 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520322 3020 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520333 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520343 3020 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520353 3020 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520363 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520373 3020 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520383 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520409 3020 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520421 3020 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520432 3020 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520443 3020 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520481 3020 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520492 3020 feature_gate.go:227] unrecognized feature gate: PinnedImages
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520502 3020 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520512 3020 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520522 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520533 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520543 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520553 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520564 3020 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520576 3020 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520586 3020 feature_gate.go:227] unrecognized feature gate: MetricsServer
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520597 3020 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520607 3020 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520617 3020 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520627 3020 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520637 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520647 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520657 3020 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520667 3020 feature_gate.go:227] unrecognized feature gate: SignatureStores
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520677 3020 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520687 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520697 3020 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520707 3020 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520717 3020 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520727 3020 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520737 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520748 3020 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520760 3020 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520773 3020 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520786 3020 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520796 3020 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.520806 3020 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521405 3020 flags.go:64] FLAG: --address="0.0.0.0"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521437 3020 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521480 3020 flags.go:64] FLAG: --anonymous-auth="true"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521491 3020 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521502 3020 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521510 3020 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521521 3020 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521531 3020 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521540 3020 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521549 3020 flags.go:64] FLAG: --azure-container-registry-config=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521557 3020 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521566 3020 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521575 3020 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521586 3020 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521594 3020 flags.go:64] FLAG: --cgroup-root=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521602 3020 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521610 3020 flags.go:64] FLAG: --client-ca-file=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521619 3020 flags.go:64] FLAG: --cloud-config=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521627 3020 flags.go:64] FLAG: --cloud-provider=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521637 3020 flags.go:64] FLAG: --cluster-dns="[]"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521647 3020 flags.go:64] FLAG: --cluster-domain=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521655 3020 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521663 3020 flags.go:64] FLAG: --config-dir=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521671 3020 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521681 3020 flags.go:64] FLAG: --container-log-max-files="5"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521693 3020 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521701 3020 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521711 3020 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521720 3020 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521729 3020 flags.go:64] FLAG: --contention-profiling="false"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521737 3020 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521745 3020 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521753 3020 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521761 3020 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521773 3020 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521781 3020 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521789 3020 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521797 3020 flags.go:64] FLAG: --enable-load-reader="false"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521805 3020 flags.go:64] FLAG: --enable-server="true"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521813 3020 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521825 3020 flags.go:64] FLAG: --event-burst="100"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521833 3020 flags.go:64] FLAG: --event-qps="50"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521841 3020 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521849 3020 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521857 3020 flags.go:64] FLAG: --eviction-hard=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521867 3020 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521875 3020 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521882 3020 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521890 3020 flags.go:64] FLAG: --eviction-soft=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521900 3020 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521908 3020 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521920 3020 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521928 3020 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521934 3020 flags.go:64] FLAG: --fail-swap-on="true"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521939 3020 flags.go:64] FLAG: --feature-gates=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521947 3020 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521952 3020 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521958 3020 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521963 3020 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521970 3020 flags.go:64] FLAG: --healthz-port="10248" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521976 3020 flags.go:64] FLAG: --help="false" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521981 3020 flags.go:64] FLAG: --hostname-override="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521987 3020 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521993 3020 flags.go:64] FLAG: --http-check-frequency="20s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.521999 3020 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522004 3020 flags.go:64] FLAG: --image-credential-provider-config="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522009 3020 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522015 3020 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522020 3020 flags.go:64] FLAG: --image-service-endpoint="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522025 3020 flags.go:64] FLAG: --iptables-drop-bit="15" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522031 3020 flags.go:64] FLAG: --iptables-masquerade-bit="14" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522037 3020 flags.go:64] FLAG: --keep-terminated-pod-volumes="false" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522042 3020 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522048 3020 flags.go:64] FLAG: --kube-api-burst="100" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522054 3020 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522060 3020 flags.go:64] FLAG: --kube-api-qps="50" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522065 3020 flags.go:64] FLAG: --kube-reserved="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522071 3020 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522076 3020 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522081 3020 flags.go:64] FLAG: --kubelet-cgroups="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522087 3020 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522092 3020 flags.go:64] FLAG: --lock-file="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522098 3020 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522107 3020 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522113 3020 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522124 3020 flags.go:64] FLAG: --log-json-split-stream="false" Dec 13 00:01:44 crc 
kubenswrapper[3020]: I1213 00:01:44.522130 3020 flags.go:64] FLAG: --logging-format="text" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522135 3020 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522142 3020 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522147 3020 flags.go:64] FLAG: --manifest-url="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522153 3020 flags.go:64] FLAG: --manifest-url-header="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522162 3020 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522168 3020 flags.go:64] FLAG: --max-open-files="1000000" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522175 3020 flags.go:64] FLAG: --max-pods="110" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522181 3020 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522186 3020 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522192 3020 flags.go:64] FLAG: --memory-manager-policy="None" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522198 3020 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522204 3020 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522210 3020 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522215 3020 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522229 3020 flags.go:64] FLAG: --node-status-max-images="50" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522235 3020 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522241 3020 flags.go:64] FLAG: --oom-score-adj="-999" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522246 3020 flags.go:64] FLAG: --pod-cidr="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522252 3020 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0319702e115e7248d135e58342ccf3f458e19c39e86dc8e79036f578ce80a4" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522262 3020 flags.go:64] FLAG: --pod-manifest-path="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522267 3020 flags.go:64] FLAG: --pod-max-pids="-1" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522274 3020 flags.go:64] FLAG: --pods-per-core="0" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522279 3020 flags.go:64] FLAG: --port="10250" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522285 3020 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522291 3020 flags.go:64] FLAG: --provider-id="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522296 3020 flags.go:64] FLAG: --qos-reserved="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522302 3020 flags.go:64] FLAG: --read-only-port="10255" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522308 3020 flags.go:64] FLAG: --register-node="true" Dec 13 00:01:44 crc 
kubenswrapper[3020]: I1213 00:01:44.522316 3020 flags.go:64] FLAG: --register-schedulable="true" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522323 3020 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522333 3020 flags.go:64] FLAG: --registry-burst="10" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522339 3020 flags.go:64] FLAG: --registry-qps="5" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522345 3020 flags.go:64] FLAG: --reserved-cpus="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522350 3020 flags.go:64] FLAG: --reserved-memory="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522359 3020 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522365 3020 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522371 3020 flags.go:64] FLAG: --rotate-certificates="false" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522377 3020 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522382 3020 flags.go:64] FLAG: --runonce="false" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522388 3020 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522395 3020 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522402 3020 flags.go:64] FLAG: --seccomp-default="false" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522407 3020 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522413 3020 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522419 3020 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522425 3020 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522432 3020 flags.go:64] FLAG: --storage-driver-password="root" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522437 3020 flags.go:64] FLAG: --storage-driver-secure="false" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522443 3020 flags.go:64] FLAG: --storage-driver-table="stats" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522467 3020 flags.go:64] FLAG: --storage-driver-user="root" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522474 3020 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522481 3020 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522487 3020 flags.go:64] FLAG: --system-cgroups="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522493 3020 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522504 3020 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522509 3020 flags.go:64] FLAG: --tls-cert-file="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522515 3020 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.522523 3020 flags.go:64] FLAG: --tls-min-version="" Dec 13 00:01:44 
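[Editor's example] The FLAG dump above is emitted one flag per journal entry in a fixed shape, so the effective values can be scraped back out of a saved copy of this capture. A minimal Go sketch, assuming a local file named kubelet.log with one journal entry per line (the file name and the regexp are illustration-only assumptions, not part of the log):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // flagRe matches kubelet flag-dump entries such as:
    //   ... flags.go:64] FLAG: --max-pods="110"
    var flagRe = regexp.MustCompile(`flags\.go:\d+\] FLAG: (--[\w-]+)="?(.*?)"?$`)

    func main() {
        f, err := os.Open("kubelet.log") // assumed local copy of this capture
        if err != nil {
            panic(err)
        }
        defer f.Close()

        flags := map[string]string{}
        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some entries are very long
        for sc.Scan() {
            if m := flagRe.FindStringSubmatch(sc.Text()); m != nil {
                flags[m[1]] = m[2]
            }
        }
        fmt.Println(flags["--max-pods"], flags["--node-ip"]) // 110 192.168.126.11
    }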
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522688 3020 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522697 3020 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522704 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522711 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522718 3020 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522726 3020 feature_gate.go:227] unrecognized feature gate: SignatureStores
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522734 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522741 3020 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522748 3020 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522755 3020 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522762 3020 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522770 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522777 3020 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522783 3020 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522790 3020 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522797 3020 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522803 3020 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522810 3020 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522817 3020 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522823 3020 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522830 3020 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522836 3020 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522843 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522850 3020 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522856 3020 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522863 3020 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522870 3020 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522877 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522884 3020 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522890 3020 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522896 3020 feature_gate.go:227] unrecognized feature gate: NewOLM
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522903 3020 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522909 3020 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522915 3020 feature_gate.go:227] unrecognized feature gate: Example
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522922 3020 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522928 3020 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522937 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522943 3020 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522949 3020 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522957 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522963 3020 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522970 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522977 3020 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522984 3020 feature_gate.go:227] unrecognized feature gate: PinnedImages
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522991 3020 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.522999 3020 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523005 3020 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523012 3020 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523018 3020 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523026 3020 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523033 3020 feature_gate.go:227] unrecognized feature gate: MetricsServer
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523040 3020 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523046 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523052 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523059 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523066 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523073 3020 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523081 3020 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523089 3020 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.523096 3020 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.523104 3020 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.535164 3020 server.go:487] "Kubelet version" kubeletVersion="v1.29.5+29c95f3"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.535228 3020 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
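[Editor's example] The feature gates: &{map[...]} summary above is just Go's default rendering of a struct holding a map[string]bool. If a post-processing script needs that state, one illustrative way to recover it is to cut the text between "map[" and "]" (parseGates is a hypothetical helper, not kubelet code):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseGates turns a logged "feature gates: &{map[K:v ...]}" line
    // into a map of gate name to enabled/disabled.
    func parseGates(line string) map[string]bool {
        start := strings.Index(line, "map[")
        if start < 0 {
            return nil
        }
        inner := line[start+len("map["):]
        end := strings.Index(inner, "]")
        if end < 0 {
            return nil
        }
        gates := map[string]bool{}
        for _, kv := range strings.Fields(inner[:end]) {
            k, v, ok := strings.Cut(kv, ":")
            if !ok {
                continue
            }
            gates[k] = v == "true"
        }
        return gates
    }

    func main() {
        line := `feature gates: &{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}`
        fmt.Println(parseGates(line)["KMSv1"]) // true
    }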
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535298 3020 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535315 3020 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535326 3020 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535340 3020 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535351 3020 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535362 3020 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535373 3020 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535384 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535396 3020 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535407 3020 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535418 3020 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535431 3020 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535443 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535506 3020 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535523 3020 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535534 3020 feature_gate.go:227] unrecognized feature gate: NewOLM
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535545 3020 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535556 3020 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535568 3020 feature_gate.go:227] unrecognized feature gate: Example
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535581 3020 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535592 3020 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535604 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535616 3020 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535628 3020 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535640 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535652 3020 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535663 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535675 3020 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535687 3020 feature_gate.go:227] unrecognized feature gate: PinnedImages
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535699 3020 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535711 3020 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535722 3020 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535734 3020 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535745 3020 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535758 3020 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535770 3020 feature_gate.go:227] unrecognized feature gate: MetricsServer
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535781 3020 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535793 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535804 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535817 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535828 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535838 3020 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535850 3020 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535861 3020 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535872 3020 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535884 3020 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535895 3020 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535905 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535916 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535926 3020 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535937 3020 feature_gate.go:227] unrecognized feature gate: SignatureStores
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535948 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535958 3020 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535969 3020 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535980 3020 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.535992 3020 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536004 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536015 3020 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536027 3020 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536038 3020 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.536053 3020 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536193 3020 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536204 3020 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536215 3020 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536226 3020 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536237 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536248 3020 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536258 3020 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536270 3020 feature_gate.go:227] unrecognized feature gate: NewOLM
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536280 3020 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536292 3020 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536302 3020 feature_gate.go:227] unrecognized feature gate: Example
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536313 3020 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536324 3020 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536336 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536348 3020 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536359 3020 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536369 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536382 3020 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536393 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536405 3020 feature_gate.go:227] unrecognized feature gate: PinnedImages
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536416 3020 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536427 3020 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536438 3020 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536488 3020 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536506 3020 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536518 3020 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536528 3020 feature_gate.go:227] unrecognized feature gate: MetricsServer
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536539 3020 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536550 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536560 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536571 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536582 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536593 3020 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536604 3020 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536614 3020 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536625 3020 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536636 3020 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536647 3020 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536658 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536668 3020 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536678 3020 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536690 3020 feature_gate.go:227] unrecognized feature gate: SignatureStores
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536700 3020 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536711 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536721 3020 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536734 3020 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536758 3020 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536769 3020 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536780 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536792 3020 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536804 3020 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536815 3020 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536826 3020 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536837 3020 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536847 3020 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536858 3020 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536869 3020 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536881 3020 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536892 3020 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.536903 3020 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.536915 3020 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.537491 3020 server.go:925] "Client rotation is on, will bootstrap in background"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.542706 3020 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.546058 3020 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.546711 3020 server.go:982] "Starting client certificate rotation"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.546741 3020 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.547367 3020 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-03-27 22:19:27.25311621 +0000 UTC
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.547596 3020 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 2518h17m42.705526589s for next certificate rotation
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.555435 3020 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.559482 3020 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.562003 3020 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.576320 3020 remote_runtime.go:143] "Validated CRI v1 runtime API"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.576380 3020 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.610003 3020 remote_image.go:111] "Validated CRI v1 image API"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.614393 3020 fs.go:132] Filesystem UUIDs: map[2025-12-13-00-01-10-00:/dev/sr0 68d6f3e9-64e9-44a4-a1d0-311f9c629a01:/dev/vda4 6ea7ef63-bc43-49c4-9337-b3b14ffb2763:/dev/vda3 7B77-95E7:/dev/vda2]
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.614420 3020 fs.go:133] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/0:{mountpoint:/run/user/0 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
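[Editor's example] A consistency check on the client-certificate entries above: the kubelet logs a rotation deadline of 2026-03-27 22:19:27 UTC and a wait of 2518h17m42.7s, and starting from this boot's timestamp (2025-12-13 00:01:44 UTC, per the journal prefix) those two figures agree. The same arithmetic in Go, using only times taken from the log (whole seconds, so the fractional part differs slightly):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        now := time.Date(2025, 12, 13, 0, 1, 44, 0, time.UTC)       // timestamp of the log entry
        deadline := time.Date(2026, 3, 27, 22, 19, 27, 0, time.UTC) // logged rotation deadline
        fmt.Println(deadline.Sub(now)) // 2518h17m43s, matching the logged wait
    }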
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.627555 3020 manager.go:217] Machine: {Timestamp:2025-12-13 00:01:44.625582419 +0000 UTC m=+0.378303516 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:c1bd596843fb445da20eca66471ddf66 SystemUUID:3461abea-dacc-43bb-a12a-d3dbbec9e4b5 BootID:da2de9f7-77ba-48f1-be42-1e0e9b862fb3 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/0 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85294297088 Type:vfs Inodes:41680320 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ef:a3:5e Speed:0 Mtu:1500} {Name:br-int MacAddress:4e:ec:11:72:80:3b Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ef:a3:5e Speed:-1 Mtu:1500} {Name:eth10 MacAddress:46:99:b8:73:19:98 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:b6:dc:d9:26:03:d4 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:66:3b:bf:76:64:4e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.628016 3020 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.628226 3020 manager.go:233] Version: {KernelVersion:5.14.0-427.22.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.634651 3020 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.634840 3020 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.634880 3020 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.634891 3020 container_manager_linux.go:304] "Creating device plugin manager"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.635034 3020 manager.go:136] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.635243 3020 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.635628 3020 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.635708 3020 server.go:1227] "Using root directory" path="/var/lib/kubelet"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.636638 3020 kubelet.go:406] "Attempting to sync node with API server"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.636669 3020 kubelet.go:311] "Adding static pod path" path="/etc/kubernetes/manifests"
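[Editor's example] From the Machine capacity and the nodeConfig above, the memory this node should report as allocatable follows the usual capacity minus system-reserved minus hard-eviction rule (KubeReserved is null here). A sketch of that arithmetic in Go, using only numbers taken from the log:

    package main

    import "fmt"

    func main() {
        const Mi = 1 << 20
        capacity := int64(33654132736)    // MemoryCapacity from the Machine entry
        systemReserved := int64(350 * Mi) // SystemReserved memory=350Mi from nodeConfig
        evictionHard := int64(100 * Mi)   // HardEvictionThresholds memory.available=100Mi
        allocatable := capacity - systemReserved - evictionHard
        fmt.Printf("%d bytes (~%.1f Gi)\n", allocatable, float64(allocatable)/(1<<30))
        // 33182273536 bytes (~30.9 Gi)
    }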
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.636694 3020 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.636712 3020 kubelet.go:322] "Adding apiserver pod source"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.636889 3020 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.639103 3020 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="cri-o" version="1.29.5-5.rhaos4.16.git7032128.el9" apiVersion="v1"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.639754 3020 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640207 3020 kubelet.go:826] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640426 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640478 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640490 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640507 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640518 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640534 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640546 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640557 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640569 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640579 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640603 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640613 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640624 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640668 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.640679 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.641944 3020 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.642418 3020 server.go:1262] "Started kubelet"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.642573 3020 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.643054 3020 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.644032 3020 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 00:01:44 crc systemd[1]: Started Kubernetes Kubelet.
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.646197 3020 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.649940 3020 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.652981 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.653178 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:44 crc kubenswrapper[3020]: E1213 00:01:44.653593 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.653106 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.653699 3020 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 00:01:44 crc kubenswrapper[3020]: E1213 00:01:44.653726 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.653718 3020 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-03-31 11:32:24.59347906 +0000 UTC
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.653828 3020 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 2603h30m39.939654363s for next certificate rotation
Dec 13 00:01:44 crc kubenswrapper[3020]: E1213 00:01:44.653921 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
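[Editor's example] Every API-server failure above has the same root cause: api-int.crc.testing does not resolve through the configured resolver at 199.204.44.24:53, so each reflector and event write fails before any TCP connection is attempted. The DNS step alone can be reproduced with the Go standard library (the hostname is taken from the log; nothing else is assumed):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Same lookup the kubelet's dialer performs before connecting.
        addrs, err := net.LookupHost("api-int.crc.testing")
        if err != nil {
            fmt.Println("lookup failed:", err) // e.g. "no such host", as logged above
            return
        }
        fmt.Println("resolved:", addrs)
    }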
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.654302 3020 volume_manager.go:289] "The desired_state_of_world populator starts"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.654325 3020 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.656941 3020 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.657386 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:44 crc kubenswrapper[3020]: E1213 00:01:44.658323 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:44 crc kubenswrapper[3020]: E1213 00:01:44.663194 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="200ms"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.663345 3020 factory.go:55] Registering systemd factory
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.663397 3020 factory.go:221] Registration of the systemd container factory successfully
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.664071 3020 factory.go:153] Registering CRI-O factory
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.664112 3020 factory.go:221] Registration of the crio container factory successfully
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.664291 3020 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.664333 3020 factory.go:103] Registering Raw factory
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.664367 3020 manager.go:1196] Started watching for new ooms in manager
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.668528 3020 manager.go:319] Starting recovery of all containers
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.689851 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.689914 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.689938 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.689974 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.690012 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.690036 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.690077 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.690102 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.690139 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.690172 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.690197 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.690281 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691318 3020 reconstruct_new.go:149] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691377 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691403 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691428 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691492 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691536 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691561 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691586 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691650 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691675 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="12e733dd-0939-4f1b-9cbb-13897e093787" volumeName="kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691701 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691725 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual
state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691747 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691783 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691810 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691833 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691868 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691890 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5d722a-1123-4935-9740-52a08d018bc9" volumeName="kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691912 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691935 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.691966 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692000 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692022 3020 reconstruct_new.go:135] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692044 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692067 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692090 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692112 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692145 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692168 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692200 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692225 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692248 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692272 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692313 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692336 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692358 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692389 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf1a8966-f594-490a-9fbb-eec5bafd13d3" volumeName="kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692437 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692519 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692575 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692602 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692626 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692649 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692672 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692707 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692731 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692770 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692794 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692834 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692866 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692909 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.692934 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693024 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693047 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693069 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693091 3020 reconstruct_new.go:135] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693131 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693153 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693187 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693221 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693243 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693265 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693297 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693359 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693383 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693405 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693428 3020 reconstruct_new.go:135] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693451 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693525 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693563 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693585 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693607 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693633 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693655 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693687 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693713 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693738 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693762 3020 reconstruct_new.go:135] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693785 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693714 3020 manager.go:324] Recovery completed Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693817 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693857 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693883 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693908 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693930 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.693962 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694056 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694083 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694108 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" 
volumeName="kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694133 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694158 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694182 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694206 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694232 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694257 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694281 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694303 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694325 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694347 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694370 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" 
volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694392 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694416 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694437 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694646 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694681 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694704 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694726 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694748 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694770 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694791 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694813 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6268b7fe-8910-4505-b404-6f1df638105c" 
volumeName="kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694836 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694857 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694881 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694903 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694924 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694946 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694967 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.694990 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695012 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695035 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695057 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695081 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695103 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695125 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695184 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695207 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695234 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695258 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695283 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695309 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695333 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695357 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695386 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695408 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695488 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695542 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695568 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695591 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695619 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695642 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695666 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695691 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695727 3020 reconstruct_new.go:135] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695751 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695776 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695819 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695842 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695866 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695891 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695919 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695941 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.695965 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696005 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f40333-c860-4c04-8058-a0bf572dcf12" volumeName="kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696027 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696049 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696072 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696121 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696145 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696168 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696191 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696216 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696238 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696261 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696282 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist" seLinuxMountContext="" Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696304 3020 reconstruct_new.go:135] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696327 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696350 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696373 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696396 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696437 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696499 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696529 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696552 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a23c0ee-5648-448c-b772-83dced2891ce" volumeName="kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696574 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696596 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696618 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696640 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696662 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696683 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696705 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696741 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696901 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a48baf-1bee-4921-8bb2-9b7320e76f79" volumeName="kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.696966 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697011 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697045 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697071 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697096 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697124 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697154 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697181 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697205 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697229 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697252 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697276 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697300 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697361 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697386 3020 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" seLinuxMountContext=""
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697404 3020 reconstruct_new.go:102] "Volume reconstruction finished"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.697419 3020 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.716106 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.719148 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.719250 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.719285 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.720971 3020 cpu_manager.go:215] "Starting CPU manager" policy="none"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.721016 3020 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.721062 3020 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.737044 3020 policy_none.go:49] "None policy: Start"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.738115 3020 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.738155 3020 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.757396 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.758614 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.758678 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.758696 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.758739 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: E1213 00:01:44.761256 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.793581 3020 manager.go:296] "Starting Device Plugin manager"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.794103 3020 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.795523 3020 server.go:79] "Starting device plugin registration server"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.796073 3020 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.796199 3020 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.796210 3020 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.835981 3020 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.838084 3020 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.838136 3020 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.838170 3020 kubelet.go:2343] "Starting kubelet main sync loop"
Dec 13 00:01:44 crc kubenswrapper[3020]: E1213 00:01:44.838224 3020 kubelet.go:2367] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 00:01:44 crc kubenswrapper[3020]: W1213 00:01:44.840688 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:44 crc kubenswrapper[3020]: E1213 00:01:44.840779 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:44 crc kubenswrapper[3020]: E1213 00:01:44.865133 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="400ms"
Dec 13 00:01:44 crc kubenswrapper[3020]: E1213 00:01:44.902209 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.939394 3020 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.939608 3020 topology_manager.go:215] "Topology Admit Handler" podUID="d3ae206906481b4831fd849b559269c8" podNamespace="openshift-machine-config-operator" podName="kube-rbac-proxy-crio-crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.939683 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.941813 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.941882 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.941907 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.942060 3020 topology_manager.go:215] "Topology Admit Handler" podUID="b2a6a3b2ca08062d24afa4c01aaf9e4f" podNamespace="openshift-etcd" podName="etcd-crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.942134 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.942388 3020 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.942543 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.943254 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.943304 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.943318 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.943482 3020 topology_manager.go:215] "Topology Admit Handler" podUID="ae85115fdc231b4002b57317b41a6400" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.943621 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.943853 3020 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.943944 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.944046 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.944122 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.944209 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.944679 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.944927 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.944984 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.945143 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.945211 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.945239 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.945405 3020 topology_manager.go:215] "Topology Admit Handler" podUID="bd6a3a59e513625ca0ae3724df2686bc" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.945535 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.945691 3020 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.945775 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.946872 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.946910 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.946948 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.946917 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.947007 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.946972 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.947272 3020 topology_manager.go:215] "Topology Admit Handler" podUID="6a57a7fb1944b43a6bd11a349520d301" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.947360 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.947670 3020 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.947719 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.948515 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.948574 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.948601 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.949010 3020 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.949063 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.949391 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.949534 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.949559 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.950106 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.950166 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.950197 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.962132 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.963265 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.963334 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.963356 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:44 crc kubenswrapper[3020]: I1213 00:01:44.963411 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 13 00:01:44 crc kubenswrapper[3020]: E1213 00:01:44.964830 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.003572 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.003628 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.003653 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.003684 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.003715 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.003921 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.003993 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.004080 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.004133 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.004189 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.004234 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.004347 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.004497 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.004554 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.004600 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.106540 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.106683 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.106587 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.106736 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.106805 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.106869 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.106888 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.106904 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.106873 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.106996 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.106900 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107029 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107061 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107091 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107102 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107116 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107147 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107166 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107176 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107205 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107203 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107229 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107230 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107231 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107300 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107326 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107287 3020 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107336 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107352 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.107787 3020 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: E1213 00:01:45.267152 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="800ms"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.274633 3020 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.293006 3020 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.301219 3020 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.316536 3020 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.320402 3020 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: W1213 00:01:45.323178 3020 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a6a3b2ca08062d24afa4c01aaf9e4f.slice/crio-62604973b7d0373fd16b4a326739b51a8e19370776c7a73704f06a555dcb9422 WatchSource:0}: Error finding container 62604973b7d0373fd16b4a326739b51a8e19370776c7a73704f06a555dcb9422: Status 404 returned error can't find the container with id 62604973b7d0373fd16b4a326739b51a8e19370776c7a73704f06a555dcb9422
Dec 13 00:01:45 crc kubenswrapper[3020]: W1213 00:01:45.324119 3020 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae85115fdc231b4002b57317b41a6400.slice/crio-de090a23275e9ff4f2ef0ba1559b3c31f7c5989f47039fe863008d30d3989b9b WatchSource:0}: Error finding container de090a23275e9ff4f2ef0ba1559b3c31f7c5989f47039fe863008d30d3989b9b: Status 404 returned error can't find the container with id de090a23275e9ff4f2ef0ba1559b3c31f7c5989f47039fe863008d30d3989b9b
Dec 13 00:01:45 crc kubenswrapper[3020]: W1213 00:01:45.324789 3020 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3ae206906481b4831fd849b559269c8.slice/crio-2fb74e565cff336c05124a43d5fa93a777f134f91d772c8944fd71e0bd70bceb WatchSource:0}: Error finding container 2fb74e565cff336c05124a43d5fa93a777f134f91d772c8944fd71e0bd70bceb: Status 404 returned error can't find the container with id 2fb74e565cff336c05124a43d5fa93a777f134f91d772c8944fd71e0bd70bceb
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.365275 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.367469 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.367523 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.367538 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.367574 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: E1213 00:01:45.369152 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.662107 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.847217 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"2fb74e565cff336c05124a43d5fa93a777f134f91d772c8944fd71e0bd70bceb"}
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.848660 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"9147de6c17917b9733bb729439d93851d6a77b44afce4cde2bfb4d69574fd999"}
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.850471 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"8731899e2ee6b7bc9cc9f597c0d8f4cf0a9667e3d73c14084d2c050a3603a5d3"}
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.852142 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"de090a23275e9ff4f2ef0ba1559b3c31f7c5989f47039fe863008d30d3989b9b"}
Dec 13 00:01:45 crc kubenswrapper[3020]: I1213 00:01:45.853623 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"62604973b7d0373fd16b4a326739b51a8e19370776c7a73704f06a555dcb9422"}
Dec 13 00:01:45 crc kubenswrapper[3020]: W1213 00:01:45.870200 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:45 crc kubenswrapper[3020]: E1213 00:01:45.870302 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:45 crc kubenswrapper[3020]: W1213 00:01:45.871396 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:45 crc kubenswrapper[3020]: E1213 00:01:45.871531 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:45 crc kubenswrapper[3020]: W1213 00:01:45.915789 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:45 crc kubenswrapper[3020]: E1213 00:01:45.915881 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:46 crc kubenswrapper[3020]: E1213 00:01:46.069422 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="1.6s"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.169709 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.171573 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.171779 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.171823 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.171921 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 13 00:01:46 crc kubenswrapper[3020]: E1213 00:01:46.173903 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 13 00:01:46 crc kubenswrapper[3020]: W1213 00:01:46.217496 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:46 crc kubenswrapper[3020]: E1213 00:01:46.217933 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.654325 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.859369 3020 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="0e7c2c3e562c9d9919442f40f7882ebbafac044309ad23f75a5ea127edd01df3" exitCode=0
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.859426 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerDied","Data":"0e7c2c3e562c9d9919442f40f7882ebbafac044309ad23f75a5ea127edd01df3"}
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.859576 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.865321 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.865368 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.865378 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.866855 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.867644 3020 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="f5f986f5f7769af76962426cc5d89891b367ea283cf4f111e6df73793b6a9d4d" exitCode=0
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.867705 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"f5f986f5f7769af76962426cc5d89891b367ea283cf4f111e6df73793b6a9d4d"}
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.867782 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.868084 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.868103 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.868112 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.868419 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.868443 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.868465 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.874041 3020 generic.go:334] "Generic (PLEG): container finished" podID="d3ae206906481b4831fd849b559269c8" containerID="317c8573999c79a6d7ffececaa138bb78cd577f17177de9e8b9ccda0869e5d1b" exitCode=0
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.874103 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerDied","Data":"317c8573999c79a6d7ffececaa138bb78cd577f17177de9e8b9ccda0869e5d1b"}
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.874178 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.876527 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.876578 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.876592 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.880063 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"8b7dd56acf95babb69114bfb734c2fe832aae315fa93832279c466e2c2c330dc"}
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.880141 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"9302f0b546189f3a87b8d9663cc93cb01b5a2d3db95c7bc3af58fb27ce6cbeab"}
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.880166 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"f3e4c481e653f9eb97c4e024e125a3cf736ee7e248d4feb35d2b8148abf49f3a"}
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.881803 3020 generic.go:334] "Generic (PLEG): container finished" podID="6a57a7fb1944b43a6bd11a349520d301" containerID="024d5df2021736f8bff28a353040f152829503176350a514f3126e49c802fc98" exitCode=0
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.881846 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerDied","Data":"024d5df2021736f8bff28a353040f152829503176350a514f3126e49c802fc98"}
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.882079 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.883571 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.883621 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:46 crc kubenswrapper[3020]: I1213 00:01:46.883654 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:47 crc kubenswrapper[3020]: E1213 00:01:47.540619 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.654780 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:47 crc kubenswrapper[3020]: E1213 00:01:47.671557 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="3.2s"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.774928 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.776629 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.776674 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.776689 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.776718 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 13 00:01:47 crc kubenswrapper[3020]: E1213 00:01:47.777939 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 13 00:01:47 crc kubenswrapper[3020]: W1213 00:01:47.810087 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:47 crc kubenswrapper[3020]: E1213 00:01:47.810170 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.892547 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"953ff6c5db2dba9cad953eea0c19f73e9b6831a7fbaeb626f7244a9c2c2ce5a1"}
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.892680 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.893653 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.893676 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.893690 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.903139 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.903141 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"c47f2340f4df6a026d962be7fc8f87e20ecddfcc84b57b5d103811f7ff2e285b"}
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.903258 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"031338d28fc223a78b063dd2e4f67821165cf8deffcfd65998e2d7635fccefec"}
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.903310 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"748a828358192b95aaf301a83de070a3d0c88db887d37f7b6746e0a79b7703a9"}
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.904148 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.904186 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.904198 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.909423 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"3889e4065c8ccc3198452d7de12d524904c3a80cb7e5b2d0bbdb2501d86822fd"}
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.909471 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"192260781d494fb0fda0ec530cfb2f3769b95d68d7ed51cdba7ece427e3eeeb9"}
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.909485 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"03f468a807b805d0e77421246b6dab8f884b5ff4b6ec700846bd126ee447fd64"}
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.911279 3020 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="b20fe8c25abff54b41abec807c7bdac48cbd360572bf501d6abe8c947cb345fd" exitCode=0
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.911350 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"b20fe8c25abff54b41abec807c7bdac48cbd360572bf501d6abe8c947cb345fd"}
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.911424 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.912469 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.912502 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.912513 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.914802 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"a25e126ba2b196534ba37c9d0aa6c1b3838a1ff771fe1a1bbbc6bfe296bb2336"}
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.915338 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.916146 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.916194 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:47 crc kubenswrapper[3020]: I1213 00:01:47.916208 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:47 crc kubenswrapper[3020]: W1213 00:01:47.972649 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:47 crc kubenswrapper[3020]: E1213 00:01:47.972906 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:48 crc kubenswrapper[3020]: W1213 00:01:48.121703 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:48 crc kubenswrapper[3020]: E1213 00:01:48.121770 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:48 crc kubenswrapper[3020]: W1213 00:01:48.547855 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:48 crc kubenswrapper[3020]: E1213 00:01:48.547923 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.655814 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.853965 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.921571 3020 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="e99fd4233ac401868c37d117bde49684623b166932f58bd465d1b0e4239ff07b" exitCode=0 Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.921732 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"e99fd4233ac401868c37d117bde49684623b166932f58bd465d1b0e4239ff07b"} Dec 13 00:01:48 
crc kubenswrapper[3020]: I1213 00:01:48.921761 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.923481 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.923561 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.923593 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.930040 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"a3c1aebe7819c560028dcd0c0b6a7f4a2ff5bd55f8fe8b6cc8f9d88c23e36a22"} Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.930122 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"7f88ead159a2dab22e458270562aeefb8fa7f52c4e4928e89183b5ccc06142b7"} Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.930125 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.930190 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.930207 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.931323 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.932024 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.932027 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.932084 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.932110 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.932053 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.932152 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.932229 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.932290 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.932319 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 
00:01:48.932601 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.932629 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:48 crc kubenswrapper[3020]: I1213 00:01:48.932642 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.655503 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.937645 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"9e4e72bca69112f7acae5d9f09a568e8943686a9d9296203fddbaaf66901a17c"} Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.937720 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"5c6f3dce8acddaf41952437fc8268c1138d263545d9f7c6e0c271ad2e4421bd8"} Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.937795 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.937817 3020 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.937915 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.939129 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.939172 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.939186 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.939332 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.939353 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:49 crc kubenswrapper[3020]: I1213 00:01:49.939366 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.181785 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.181982 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.183372 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.183442 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.183531 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.654816 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.700734 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:01:50 crc kubenswrapper[3020]: E1213 00:01:50.875602 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="6.4s" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.948255 3020 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.948310 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.948903 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.949174 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"59158c2016f37ba76cae4f98ea22f7862612ce0ef52a336054f7e7eba2bb5031"} Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.949198 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"d5ab0d9b15c9bd736cc92aa1fb290af93ac9f990ac911a56be4ab9ecd7d913d4"} Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.949615 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.949673 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.949686 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.950916 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.950986 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.951011 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.978364 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.980043 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.980103 3020 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.980123 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:50 crc kubenswrapper[3020]: I1213 00:01:50.980157 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:01:50 crc kubenswrapper[3020]: E1213 00:01:50.981630 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.575075 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.575315 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.579237 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.579303 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.579317 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.656023 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.781358 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:01:51 crc kubenswrapper[3020]: W1213 00:01:51.860879 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:51 crc kubenswrapper[3020]: E1213 00:01:51.861006 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.951349 3020 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.951999 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.951396 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.953562 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.953631 3020 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.953657 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.953764 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.953822 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:51 crc kubenswrapper[3020]: I1213 00:01:51.953853 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:52 crc kubenswrapper[3020]: I1213 00:01:52.395777 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 13 00:01:52 crc kubenswrapper[3020]: W1213 00:01:52.506336 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:52 crc kubenswrapper[3020]: E1213 00:01:52.506443 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:52 crc kubenswrapper[3020]: I1213 00:01:52.655420 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:52 crc kubenswrapper[3020]: W1213 00:01:52.752809 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:52 crc kubenswrapper[3020]: E1213 00:01:52.752937 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:52 crc kubenswrapper[3020]: I1213 00:01:52.954304 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:52 crc kubenswrapper[3020]: I1213 00:01:52.955621 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:52 crc kubenswrapper[3020]: I1213 00:01:52.955698 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:52 crc kubenswrapper[3020]: I1213 00:01:52.955719 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:52 crc kubenswrapper[3020]: W1213 00:01:52.969885 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:52 crc kubenswrapper[3020]: E1213 00:01:52.970013 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:53 crc kubenswrapper[3020]: I1213 00:01:53.239886 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:01:53 crc kubenswrapper[3020]: I1213 00:01:53.240170 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:53 crc kubenswrapper[3020]: I1213 00:01:53.242272 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:53 crc kubenswrapper[3020]: I1213 00:01:53.242307 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:53 crc kubenswrapper[3020]: I1213 00:01:53.242316 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:53 crc kubenswrapper[3020]: I1213 00:01:53.655040 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:54 crc kubenswrapper[3020]: I1213 00:01:54.322631 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:01:54 crc kubenswrapper[3020]: I1213 00:01:54.322884 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:54 crc kubenswrapper[3020]: I1213 00:01:54.327875 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:54 crc kubenswrapper[3020]: I1213 00:01:54.327951 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:54 crc kubenswrapper[3020]: I1213 00:01:54.328001 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:54 crc kubenswrapper[3020]: I1213 00:01:54.575821 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:01:54 crc kubenswrapper[3020]: I1213 00:01:54.576042 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:01:54 crc kubenswrapper[3020]: I1213 00:01:54.656427 3020 csi_plugin.go:880] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:54 crc kubenswrapper[3020]: E1213 00:01:54.902762 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.655859 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.873694 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.873995 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.876074 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.876158 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.876181 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.889469 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.962805 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.964322 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.964383 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.964404 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:55 crc kubenswrapper[3020]: I1213 00:01:55.968746 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:01:56 crc kubenswrapper[3020]: I1213 00:01:56.655086 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:56 crc kubenswrapper[3020]: I1213 00:01:56.965857 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:56 crc kubenswrapper[3020]: I1213 00:01:56.967236 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:56 crc kubenswrapper[3020]: I1213 00:01:56.967406 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:56 crc kubenswrapper[3020]: I1213 
00:01:56.967550 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:57 crc kubenswrapper[3020]: E1213 00:01:57.277908 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:01:57 crc kubenswrapper[3020]: I1213 00:01:57.381922 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:57 crc kubenswrapper[3020]: I1213 00:01:57.383589 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:57 crc kubenswrapper[3020]: I1213 00:01:57.383645 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:57 crc kubenswrapper[3020]: I1213 00:01:57.383662 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:57 crc kubenswrapper[3020]: I1213 00:01:57.383690 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:01:57 crc kubenswrapper[3020]: E1213 00:01:57.384955 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:01:57 crc kubenswrapper[3020]: E1213 00:01:57.543118 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:01:57 crc kubenswrapper[3020]: I1213 00:01:57.655717 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:58 crc kubenswrapper[3020]: I1213 00:01:58.656720 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.093268 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.093573 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.094845 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 
13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.094891 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.094905 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.234642 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.655109 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.974921 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.976634 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.976709 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.976725 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:01:59 crc kubenswrapper[3020]: I1213 00:01:59.998632 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 13 00:02:00 crc kubenswrapper[3020]: I1213 00:02:00.307211 3020 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Dec 13 00:02:00 crc kubenswrapper[3020]: I1213 00:02:00.307375 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 13 00:02:00 crc kubenswrapper[3020]: I1213 00:02:00.322392 3020 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Dec 13 00:02:00 crc kubenswrapper[3020]: I1213 00:02:00.322721 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 13 00:02:00 crc kubenswrapper[3020]: I1213 00:02:00.655200 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:00 crc kubenswrapper[3020]: 
I1213 00:02:00.708042 3020 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]log ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]etcd ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/start-kube-apiserver-admission-initializer ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/generic-apiserver-start-informers ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/priority-and-fairness-filter ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/start-apiextensions-informers ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/start-apiextensions-controllers ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/crd-informer-synced ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 13 00:02:00 crc kubenswrapper[3020]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 13 00:02:00 crc kubenswrapper[3020]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/start-system-namespaces-controller ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/bootstrap-controller ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/start-kube-aggregator-informers ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/apiservice-registration-controller ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/apiservice-status-available-controller ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]autoregister-completion ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/apiservice-openapi-controller ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 13 00:02:00 crc kubenswrapper[3020]: [+]poststarthook/apiservice-discovery-controller ok Dec 13 
00:02:00 crc kubenswrapper[3020]: healthz check failed Dec 13 00:02:00 crc kubenswrapper[3020]: I1213 00:02:00.708122 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:02:00 crc kubenswrapper[3020]: I1213 00:02:00.977158 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:00 crc kubenswrapper[3020]: I1213 00:02:00.978031 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:00 crc kubenswrapper[3020]: I1213 00:02:00.978068 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:00 crc kubenswrapper[3020]: I1213 00:02:00.978080 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:01 crc kubenswrapper[3020]: W1213 00:02:01.087652 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:01 crc kubenswrapper[3020]: E1213 00:02:01.087779 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:01 crc kubenswrapper[3020]: I1213 00:02:01.654578 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:02 crc kubenswrapper[3020]: I1213 00:02:02.655637 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:03 crc kubenswrapper[3020]: W1213 00:02:03.613167 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:03 crc kubenswrapper[3020]: E1213 00:02:03.613335 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:03 crc kubenswrapper[3020]: I1213 00:02:03.654489 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:04 crc kubenswrapper[3020]: E1213 00:02:04.280336 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:02:04 crc kubenswrapper[3020]: I1213 00:02:04.385728 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:04 crc kubenswrapper[3020]: I1213 00:02:04.387646 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:04 crc kubenswrapper[3020]: I1213 00:02:04.387710 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:04 crc kubenswrapper[3020]: I1213 00:02:04.387729 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:04 crc kubenswrapper[3020]: I1213 00:02:04.387771 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:02:04 crc kubenswrapper[3020]: E1213 00:02:04.389877 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:02:04 crc kubenswrapper[3020]: I1213 00:02:04.576044 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:02:04 crc kubenswrapper[3020]: I1213 00:02:04.576218 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:02:04 crc kubenswrapper[3020]: I1213 00:02:04.655382 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:04 crc kubenswrapper[3020]: E1213 00:02:04.904025 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:02:04 crc kubenswrapper[3020]: W1213 00:02:04.940051 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:04 crc kubenswrapper[3020]: E1213 00:02:04.940149 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:05 crc kubenswrapper[3020]: I1213 00:02:05.346263 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:02:05 crc 
kubenswrapper[3020]: I1213 00:02:05.346406 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:05 crc kubenswrapper[3020]: I1213 00:02:05.348053 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:05 crc kubenswrapper[3020]: I1213 00:02:05.348126 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:05 crc kubenswrapper[3020]: I1213 00:02:05.348156 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:05 crc kubenswrapper[3020]: W1213 00:02:05.397731 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:05 crc kubenswrapper[3020]: E1213 00:02:05.397823 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:05 crc kubenswrapper[3020]: I1213 00:02:05.654958 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:05 crc kubenswrapper[3020]: I1213 00:02:05.706152 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:02:05 crc kubenswrapper[3020]: I1213 00:02:05.993325 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:05 crc kubenswrapper[3020]: I1213 00:02:05.994527 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:05 crc kubenswrapper[3020]: I1213 00:02:05.994630 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:05 crc kubenswrapper[3020]: I1213 00:02:05.994699 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:06 crc kubenswrapper[3020]: I1213 00:02:06.000855 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:02:06 crc kubenswrapper[3020]: I1213 00:02:06.655552 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:06 crc kubenswrapper[3020]: I1213 00:02:06.996707 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:06 crc kubenswrapper[3020]: I1213 00:02:06.999005 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:06 crc kubenswrapper[3020]: I1213 00:02:06.999105 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 00:02:06 crc kubenswrapper[3020]: I1213 00:02:06.999135 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:07 crc kubenswrapper[3020]: E1213 00:02:07.544898 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:02:07 crc kubenswrapper[3020]: I1213 00:02:07.654317 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:08 crc kubenswrapper[3020]: I1213 00:02:08.656215 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:09 crc kubenswrapper[3020]: I1213 00:02:09.654905 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:10 crc kubenswrapper[3020]: I1213 00:02:10.655677 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:11 crc kubenswrapper[3020]: E1213 00:02:11.282673 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:02:11 crc kubenswrapper[3020]: I1213 00:02:11.391097 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:11 crc kubenswrapper[3020]: I1213 00:02:11.392771 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:11 crc kubenswrapper[3020]: I1213 00:02:11.392815 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:11 crc kubenswrapper[3020]: I1213 00:02:11.392829 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:11 crc kubenswrapper[3020]: I1213 00:02:11.392853 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:02:11 crc kubenswrapper[3020]: E1213 00:02:11.394417 3020 kubelet_node_status.go:100] "Unable to register 
node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:02:11 crc kubenswrapper[3020]: I1213 00:02:11.655431 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:12 crc kubenswrapper[3020]: I1213 00:02:12.655296 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:13 crc kubenswrapper[3020]: I1213 00:02:13.655795 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:14 crc kubenswrapper[3020]: I1213 00:02:14.575168 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:02:14 crc kubenswrapper[3020]: I1213 00:02:14.575300 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:02:14 crc kubenswrapper[3020]: I1213 00:02:14.575355 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:02:14 crc kubenswrapper[3020]: I1213 00:02:14.579431 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:14 crc kubenswrapper[3020]: I1213 00:02:14.581185 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:14 crc kubenswrapper[3020]: I1213 00:02:14.581228 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:14 crc kubenswrapper[3020]: I1213 00:02:14.581243 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:14 crc kubenswrapper[3020]: I1213 00:02:14.583349 3020 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"9302f0b546189f3a87b8d9663cc93cb01b5a2d3db95c7bc3af58fb27ce6cbeab"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Dec 13 00:02:14 crc kubenswrapper[3020]: I1213 00:02:14.583787 3020 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" 
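The restart cycle recorded above is driven by the kubelet's HTTP startup probe against https://192.168.126.11:10357/healthz. A minimal sketch of that failing request, as hypothetical diagnostic code rather than the kubelet's own prober (which lives in k8s.io/kubernetes/pkg/probe); the 1-second timeout is an assumption, since the real value comes from the pod spec:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Reproduce the probe from the log: GET https://192.168.126.11:10357/healthz.
	// When no response headers arrive before the client timeout fires, net/http
	// reports "request canceled while waiting for connection (Client.Timeout
	// exceeded while awaiting headers)", the exact output recorded above.
	client := &http.Client{
		Timeout: 1 * time.Second, // assumed; the probe timeout is set in the pod spec
		Transport: &http.Transport{
			// The endpoint serves a self-signed cert; skip verification for
			// this diagnostic, as the kubelet's HTTPS prober also does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.126.11:10357/healthz")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}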
containerID="cri-o://9302f0b546189f3a87b8d9663cc93cb01b5a2d3db95c7bc3af58fb27ce6cbeab" gracePeriod=30 Dec 13 00:02:14 crc kubenswrapper[3020]: I1213 00:02:14.655345 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:14 crc kubenswrapper[3020]: E1213 00:02:14.905096 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:02:15 crc kubenswrapper[3020]: I1213 00:02:15.019500 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/1.log" Dec 13 00:02:15 crc kubenswrapper[3020]: I1213 00:02:15.020162 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"9302f0b546189f3a87b8d9663cc93cb01b5a2d3db95c7bc3af58fb27ce6cbeab"} Dec 13 00:02:15 crc kubenswrapper[3020]: I1213 00:02:15.020299 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:15 crc kubenswrapper[3020]: I1213 00:02:15.020169 3020 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="9302f0b546189f3a87b8d9663cc93cb01b5a2d3db95c7bc3af58fb27ce6cbeab" exitCode=255 Dec 13 00:02:15 crc kubenswrapper[3020]: I1213 00:02:15.020633 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"25cf66d93a23a1216ee61df90cad33f3170c0ff5e364f333b4adebf41e14f6b0"} Dec 13 00:02:15 crc kubenswrapper[3020]: I1213 00:02:15.021412 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:15 crc kubenswrapper[3020]: I1213 00:02:15.021477 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:15 crc kubenswrapper[3020]: I1213 00:02:15.021494 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:15 crc kubenswrapper[3020]: I1213 00:02:15.656252 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:16 crc kubenswrapper[3020]: I1213 00:02:16.023423 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:16 crc kubenswrapper[3020]: I1213 00:02:16.024762 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:16 crc kubenswrapper[3020]: I1213 00:02:16.024865 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:16 crc kubenswrapper[3020]: I1213 00:02:16.024890 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:16 crc kubenswrapper[3020]: W1213 00:02:16.225241 3020 reflector.go:539] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:16 crc kubenswrapper[3020]: E1213 00:02:16.225372 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:16 crc kubenswrapper[3020]: I1213 00:02:16.655216 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:17 crc kubenswrapper[3020]: E1213 00:02:17.547523 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:02:17 crc kubenswrapper[3020]: I1213 00:02:17.655268 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:18 crc kubenswrapper[3020]: E1213 00:02:18.284507 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:02:18 crc kubenswrapper[3020]: I1213 00:02:18.395504 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:18 crc kubenswrapper[3020]: I1213 00:02:18.397186 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:18 crc kubenswrapper[3020]: I1213 00:02:18.397252 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:18 crc kubenswrapper[3020]: I1213 00:02:18.397276 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:18 crc kubenswrapper[3020]: I1213 00:02:18.397311 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:02:18 crc kubenswrapper[3020]: E1213 00:02:18.398589 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" 
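Every failure in this stretch bottoms out in the same resolver error: api-int.crc.testing has no record on the configured nameserver (199.204.44.24). A standalone sketch that reproduces the lookup the kubelet keeps retrying; hypothetical diagnostic code, not part of any component shown here:

package main

import (
	"errors"
	"fmt"
	"net"
)

func main() {
	// The kubelet resolves api-int.crc.testing before every API request.
	// A missing record surfaces as a *net.DNSError with IsNotFound set,
	// which the Go runtime prints as "no such host", exactly as above.
	addrs, err := net.LookupHost("api-int.crc.testing")
	var dnsErr *net.DNSError
	if errors.As(err, &dnsErr) && dnsErr.IsNotFound {
		fmt.Println("no such host:", dnsErr)
		return
	}
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}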
node="crc" Dec 13 00:02:18 crc kubenswrapper[3020]: I1213 00:02:18.655011 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:19 crc kubenswrapper[3020]: I1213 00:02:19.656113 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:20 crc kubenswrapper[3020]: I1213 00:02:20.655084 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:20 crc kubenswrapper[3020]: W1213 00:02:20.706617 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:20 crc kubenswrapper[3020]: E1213 00:02:20.706730 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:21 crc kubenswrapper[3020]: I1213 00:02:21.574877 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:02:21 crc kubenswrapper[3020]: I1213 00:02:21.575139 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:21 crc kubenswrapper[3020]: I1213 00:02:21.576680 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:21 crc kubenswrapper[3020]: I1213 00:02:21.576712 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:21 crc kubenswrapper[3020]: I1213 00:02:21.576722 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:21 crc kubenswrapper[3020]: I1213 00:02:21.655419 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:22 crc kubenswrapper[3020]: I1213 00:02:22.655224 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:22 crc kubenswrapper[3020]: W1213 00:02:22.745313 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:22 crc kubenswrapper[3020]: E1213 00:02:22.745532 3020 reflector.go:147] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:23 crc kubenswrapper[3020]: I1213 00:02:23.660797 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:24 crc kubenswrapper[3020]: I1213 00:02:24.323107 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:02:24 crc kubenswrapper[3020]: I1213 00:02:24.323955 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:24 crc kubenswrapper[3020]: I1213 00:02:24.325930 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:24 crc kubenswrapper[3020]: I1213 00:02:24.325993 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:24 crc kubenswrapper[3020]: I1213 00:02:24.326020 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:24 crc kubenswrapper[3020]: I1213 00:02:24.575793 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:02:24 crc kubenswrapper[3020]: I1213 00:02:24.576257 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:02:24 crc kubenswrapper[3020]: I1213 00:02:24.655299 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:24 crc kubenswrapper[3020]: E1213 00:02:24.906315 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:02:25 crc kubenswrapper[3020]: E1213 00:02:25.287231 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:02:25 crc kubenswrapper[3020]: I1213 00:02:25.400143 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:25 crc kubenswrapper[3020]: I1213 00:02:25.403001 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:25 crc kubenswrapper[3020]: I1213 00:02:25.403039 
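The reflector warnings above come from client-go informers repeating their initial LIST; the failing URLs show exactly what is requested. Roughly the same call made directly, as a sketch assuming a kubeconfig is available (the kubelet's real wiring goes through k8s.io/client-go/tools/cache and its own credentials):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (an assumption for this sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirror the LIST in the failing reflector URL:
	// /api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=crc",
		Limit:         500,
	})
	if err != nil {
		fmt.Println("list failed:", err) // "no such host" while DNS is broken
		return
	}
	fmt.Println("listed", len(nodes.Items), "node(s)")
}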
Dec 13 00:02:25 crc kubenswrapper[3020]: I1213 00:02:25.403049 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:02:25 crc kubenswrapper[3020]: I1213 00:02:25.403106 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 13 00:02:25 crc kubenswrapper[3020]: E1213 00:02:25.404316 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 13 00:02:25 crc kubenswrapper[3020]: I1213 00:02:25.654533 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:26 crc kubenswrapper[3020]: I1213 00:02:26.655222 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:27 crc kubenswrapper[3020]: E1213 00:02:27.549368 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 13 00:02:27 crc kubenswrapper[3020]: I1213 00:02:27.654828 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:28 crc kubenswrapper[3020]: I1213 00:02:28.654934 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:29 crc kubenswrapper[3020]: I1213 00:02:29.655541 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:30 crc kubenswrapper[3020]: I1213 00:02:30.654959 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:30 crc kubenswrapper[3020]: W1213 00:02:30.784820 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:30 crc kubenswrapper[3020]: E1213 00:02:30.784982 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:31 crc kubenswrapper[3020]: I1213 00:02:31.655303 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:32 crc kubenswrapper[3020]: E1213 00:02:32.289034 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 13 00:02:32 crc kubenswrapper[3020]: I1213 00:02:32.404895 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:02:32 crc kubenswrapper[3020]: I1213 00:02:32.406425 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:02:32 crc kubenswrapper[3020]: I1213 00:02:32.406501 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:02:32 crc kubenswrapper[3020]: I1213 00:02:32.406517 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:02:32 crc kubenswrapper[3020]: I1213 00:02:32.406553 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 13 00:02:32 crc kubenswrapper[3020]: E1213 00:02:32.407958 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 13 00:02:32 crc kubenswrapper[3020]: I1213 00:02:32.655070 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:33 crc kubenswrapper[3020]: I1213 00:02:33.655299 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:34 crc kubenswrapper[3020]: I1213 00:02:34.575382 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 13 00:02:34 crc kubenswrapper[3020]: I1213 00:02:34.575536 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:02:34 crc kubenswrapper[3020]: I1213 00:02:34.655489 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:34 crc kubenswrapper[3020]: E1213 00:02:34.907054 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:02:35 crc kubenswrapper[3020]: I1213 00:02:35.655439 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:36 crc kubenswrapper[3020]: I1213 00:02:36.655118 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:37 crc kubenswrapper[3020]: E1213 00:02:37.551811 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:02:37 crc kubenswrapper[3020]: I1213 00:02:37.655839 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:38 crc kubenswrapper[3020]: I1213 00:02:38.654955 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:38 crc kubenswrapper[3020]: I1213 00:02:38.858520 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:02:38 crc kubenswrapper[3020]: I1213 00:02:38.858667 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:38 crc kubenswrapper[3020]: I1213 00:02:38.860109 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:38 crc kubenswrapper[3020]: I1213 00:02:38.860157 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:38 crc kubenswrapper[3020]: I1213 
00:02:38.860171 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:39 crc kubenswrapper[3020]: E1213 00:02:39.290625 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:02:39 crc kubenswrapper[3020]: I1213 00:02:39.408862 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:39 crc kubenswrapper[3020]: I1213 00:02:39.410277 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:39 crc kubenswrapper[3020]: I1213 00:02:39.410317 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:39 crc kubenswrapper[3020]: I1213 00:02:39.410332 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:39 crc kubenswrapper[3020]: I1213 00:02:39.410357 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:02:39 crc kubenswrapper[3020]: E1213 00:02:39.411447 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:02:39 crc kubenswrapper[3020]: I1213 00:02:39.658693 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:40 crc kubenswrapper[3020]: I1213 00:02:40.655537 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:41 crc kubenswrapper[3020]: I1213 00:02:41.654613 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:42 crc kubenswrapper[3020]: I1213 00:02:42.655136 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:43 crc kubenswrapper[3020]: I1213 00:02:43.655604 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.575603 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.575749 3020 
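The "Unable to write event (may retry after sleeping)" entries are client-go's event broadcaster failing to POST the queued Starting event and backing off. A sketch of how such an event gets recorded through client-go (assumes a kubeconfig; the kubelet builds its recorder with its own credentials and scheme):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/record"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The broadcaster is what logs "Unable to write event (may retry after
	// sleeping)" when the POST to /api/v1/namespaces/default/events fails;
	// it sleeps and retries a bounded number of times per event.
	broadcaster := record.NewBroadcaster()
	defer broadcaster.Shutdown()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: cs.CoreV1().Events("")})
	recorder := broadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "kubelet", Host: "crc"})

	// The same object reference and reason seen in the failing payload above.
	node := &corev1.ObjectReference{Kind: "Node", Name: "crc", UID: "crc"}
	recorder.Event(node, corev1.EventTypeNormal, "Starting", "Starting kubelet.")
	fmt.Println("event queued; delivery is asynchronous and retried on failure")
}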
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.575831 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.576050 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.577904 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.577951 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.577968 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.580033 3020 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"25cf66d93a23a1216ee61df90cad33f3170c0ff5e364f333b4adebf41e14f6b0"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.580371 3020 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://25cf66d93a23a1216ee61df90cad33f3170c0ff5e364f333b4adebf41e14f6b0" gracePeriod=30 Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.654158 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.654246 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.654307 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.654356 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.654396 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:02:44 crc kubenswrapper[3020]: I1213 00:02:44.655834 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:44 crc kubenswrapper[3020]: E1213 00:02:44.907985 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:02:45 crc 
kubenswrapper[3020]: I1213 00:02:45.100904 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/2.log" Dec 13 00:02:45 crc kubenswrapper[3020]: I1213 00:02:45.102655 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/1.log" Dec 13 00:02:45 crc kubenswrapper[3020]: I1213 00:02:45.103238 3020 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="25cf66d93a23a1216ee61df90cad33f3170c0ff5e364f333b4adebf41e14f6b0" exitCode=255 Dec 13 00:02:45 crc kubenswrapper[3020]: I1213 00:02:45.103304 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"25cf66d93a23a1216ee61df90cad33f3170c0ff5e364f333b4adebf41e14f6b0"} Dec 13 00:02:45 crc kubenswrapper[3020]: I1213 00:02:45.103343 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"e731595fa3a5d4caf6abb6b5f0de1823bd7feedbe79a73f39316f7ff5364ccba"} Dec 13 00:02:45 crc kubenswrapper[3020]: I1213 00:02:45.103393 3020 scope.go:117] "RemoveContainer" containerID="9302f0b546189f3a87b8d9663cc93cb01b5a2d3db95c7bc3af58fb27ce6cbeab" Dec 13 00:02:45 crc kubenswrapper[3020]: I1213 00:02:45.103621 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:45 crc kubenswrapper[3020]: I1213 00:02:45.110222 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:45 crc kubenswrapper[3020]: I1213 00:02:45.110264 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:02:45 crc kubenswrapper[3020]: I1213 00:02:45.110279 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:02:45 crc kubenswrapper[3020]: I1213 00:02:45.655068 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:02:46 crc kubenswrapper[3020]: I1213 00:02:46.108635 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/2.log" Dec 13 00:02:46 crc kubenswrapper[3020]: E1213 00:02:46.292569 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:02:46 crc kubenswrapper[3020]: I1213 00:02:46.412504 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:02:46 crc kubenswrapper[3020]: I1213 00:02:46.413972 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:02:46 crc kubenswrapper[3020]: 
Dec 13 00:02:46 crc kubenswrapper[3020]: I1213 00:02:46.414031 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:02:46 crc kubenswrapper[3020]: I1213 00:02:46.414060 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 13 00:02:46 crc kubenswrapper[3020]: E1213 00:02:46.415278 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 13 00:02:46 crc kubenswrapper[3020]: W1213 00:02:46.635606 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:46 crc kubenswrapper[3020]: E1213 00:02:46.635708 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:46 crc kubenswrapper[3020]: I1213 00:02:46.655119 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:47 crc kubenswrapper[3020]: E1213 00:02:47.554394 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 13 00:02:47 crc kubenswrapper[3020]: I1213 00:02:47.655654 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:48 crc kubenswrapper[3020]: I1213 00:02:48.655245 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:49 crc kubenswrapper[3020]: I1213 00:02:49.656252 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:50 crc kubenswrapper[3020]: I1213 00:02:50.655298 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:50 crc kubenswrapper[3020]: I1213 00:02:50.839112 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:02:50 crc kubenswrapper[3020]: I1213 00:02:50.841031 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:02:50 crc kubenswrapper[3020]: I1213 00:02:50.841094 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:02:50 crc kubenswrapper[3020]: I1213 00:02:50.841109 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:02:51 crc kubenswrapper[3020]: I1213 00:02:51.575557 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:02:51 crc kubenswrapper[3020]: I1213 00:02:51.575800 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:02:51 crc kubenswrapper[3020]: I1213 00:02:51.577143 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:02:51 crc kubenswrapper[3020]: I1213 00:02:51.577264 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:02:51 crc kubenswrapper[3020]: I1213 00:02:51.577357 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:02:51 crc kubenswrapper[3020]: I1213 00:02:51.655638 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:52 crc kubenswrapper[3020]: I1213 00:02:52.655437 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:53 crc kubenswrapper[3020]: E1213 00:02:53.294654 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 13 00:02:53 crc kubenswrapper[3020]: I1213 00:02:53.416164 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:02:53 crc kubenswrapper[3020]: I1213 00:02:53.418365 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:02:53 crc kubenswrapper[3020]: I1213 00:02:53.418407 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:02:53 crc kubenswrapper[3020]: I1213 00:02:53.418417 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:02:53 crc kubenswrapper[3020]: I1213 00:02:53.418445 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc"
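The "Attempting to register node" / "Unable to register node with API server" pairs are, at bottom, a POST to /api/v1/nodes that fails while DNS is broken, after which the kubelet retries on its registration backoff. A sketch of that one call under the same kubeconfig assumption as above (the kubelet's real registration also attaches labels, addresses, and capacity to the Node object):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// POST /api/v1/nodes; with api-int.crc.testing unresolvable this returns
	// the "no such host" error recorded in the log, and nothing is created.
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "crc"}}
	if _, err := cs.CoreV1().Nodes().Create(context.Background(), node, metav1.CreateOptions{}); err != nil {
		fmt.Println("unable to register node with API server:", err)
	}
}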
Dec 13 00:02:53 crc kubenswrapper[3020]: E1213 00:02:53.419652 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Dec 13 00:02:53 crc kubenswrapper[3020]: I1213 00:02:53.654826 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:54 crc kubenswrapper[3020]: I1213 00:02:54.322688 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:02:54 crc kubenswrapper[3020]: I1213 00:02:54.322990 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:02:54 crc kubenswrapper[3020]: I1213 00:02:54.324663 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:02:54 crc kubenswrapper[3020]: I1213 00:02:54.324731 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:02:54 crc kubenswrapper[3020]: I1213 00:02:54.324757 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:02:54 crc kubenswrapper[3020]: I1213 00:02:54.575668 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 13 00:02:54 crc kubenswrapper[3020]: I1213 00:02:54.575765 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 13 00:02:54 crc kubenswrapper[3020]: I1213 00:02:54.655597 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:54 crc kubenswrapper[3020]: E1213 00:02:54.908755 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 13 00:02:55 crc kubenswrapper[3020]: I1213 00:02:55.655474 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:56 crc kubenswrapper[3020]: W1213 00:02:56.302667 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:56 crc kubenswrapper[3020]: E1213 00:02:56.303097 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:56 crc kubenswrapper[3020]: I1213 00:02:56.654840 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:57 crc kubenswrapper[3020]: W1213 00:02:57.290524 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:57 crc kubenswrapper[3020]: E1213 00:02:57.290636 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:57 crc kubenswrapper[3020]: E1213 00:02:57.556475 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 13 00:02:57 crc kubenswrapper[3020]: I1213 00:02:57.655211 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:58 crc kubenswrapper[3020]: I1213 00:02:58.682081 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:02:59 crc kubenswrapper[3020]: I1213 00:02:59.655624 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Dec 13 00:03:00 crc kubenswrapper[3020]: E1213 00:03:00.298021 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Dec 13 00:03:00 crc kubenswrapper[3020]: I1213 00:03:00.420437 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
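The once-a-second "waiting for CSINode publishing" lines are csi_plugin.go polling for the node's storage.k8s.io/v1 CSINode object until the API server becomes reachable. A sketch of that polling loop under the same kubeconfig assumption (the real code also gives up after a deadline and validates the returned object):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /apis/storage.k8s.io/v1/csinodes/crc roughly once a second; each
	// failed attempt corresponds to one "Failed to contact API server when
	// waiting for CSINode publishing" line above.
	for i := 0; i < 5; i++ {
		if _, err := cs.StorageV1().CSINodes().Get(context.Background(), "crc", metav1.GetOptions{}); err != nil {
			fmt.Println("failed to contact API server when waiting for CSINode publishing:", err)
			time.Sleep(time.Second)
			continue
		}
		fmt.Println("CSINode published")
		return
	}
}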
"Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:00 crc kubenswrapper[3020]: I1213 00:03:00.424822 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:00 crc kubenswrapper[3020]: I1213 00:03:00.425062 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:00 crc kubenswrapper[3020]: I1213 00:03:00.425139 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:00 crc kubenswrapper[3020]: I1213 00:03:00.425221 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:03:00 crc kubenswrapper[3020]: E1213 00:03:00.426877 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:03:00 crc kubenswrapper[3020]: I1213 00:03:00.655154 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:01 crc kubenswrapper[3020]: I1213 00:03:01.655258 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:02 crc kubenswrapper[3020]: I1213 00:03:02.655055 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:03 crc kubenswrapper[3020]: I1213 00:03:03.655213 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:04 crc kubenswrapper[3020]: W1213 00:03:04.546838 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:04 crc kubenswrapper[3020]: E1213 00:03:04.547002 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:04 crc kubenswrapper[3020]: I1213 00:03:04.576065 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:03:04 crc kubenswrapper[3020]: I1213 00:03:04.576317 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:03:04 crc kubenswrapper[3020]: I1213 00:03:04.655297 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:04 crc kubenswrapper[3020]: E1213 00:03:04.909822 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:03:05 crc kubenswrapper[3020]: I1213 00:03:05.655559 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:06 crc kubenswrapper[3020]: I1213 00:03:06.654883 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:07 crc kubenswrapper[3020]: E1213 00:03:07.300840 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:03:07 crc kubenswrapper[3020]: I1213 00:03:07.427771 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:07 crc kubenswrapper[3020]: I1213 00:03:07.429374 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:07 crc kubenswrapper[3020]: I1213 00:03:07.429437 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:07 crc kubenswrapper[3020]: I1213 00:03:07.429478 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:07 crc kubenswrapper[3020]: I1213 00:03:07.429516 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:03:07 crc kubenswrapper[3020]: E1213 00:03:07.431054 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:03:07 crc kubenswrapper[3020]: E1213 00:03:07.558726 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC 
m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:03:07 crc kubenswrapper[3020]: I1213 00:03:07.654875 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:08 crc kubenswrapper[3020]: I1213 00:03:08.654854 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:09 crc kubenswrapper[3020]: I1213 00:03:09.655224 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:10 crc kubenswrapper[3020]: I1213 00:03:10.654583 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:11 crc kubenswrapper[3020]: I1213 00:03:11.655279 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:12 crc kubenswrapper[3020]: I1213 00:03:12.654903 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:13 crc kubenswrapper[3020]: I1213 00:03:13.654965 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:14 crc kubenswrapper[3020]: E1213 00:03:14.302713 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.431845 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.432919 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.432964 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.432975 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.432999 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:03:14 crc kubenswrapper[3020]: E1213 00:03:14.434236 3020 kubelet_node_status.go:100] "Unable to register node with API 
server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.576005 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.576152 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.576226 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.576392 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.577526 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.577560 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.577570 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.578987 3020 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"e731595fa3a5d4caf6abb6b5f0de1823bd7feedbe79a73f39316f7ff5364ccba"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.579248 3020 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://e731595fa3a5d4caf6abb6b5f0de1823bd7feedbe79a73f39316f7ff5364ccba" gracePeriod=30 Dec 13 00:03:14 crc kubenswrapper[3020]: I1213 00:03:14.654757 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:14 crc kubenswrapper[3020]: E1213 00:03:14.910743 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:03:15 crc kubenswrapper[3020]: I1213 00:03:15.195515 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/3.log" Dec 13 00:03:15 crc kubenswrapper[3020]: I1213 
00:03:15.196258 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/2.log" Dec 13 00:03:15 crc kubenswrapper[3020]: I1213 00:03:15.197736 3020 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="e731595fa3a5d4caf6abb6b5f0de1823bd7feedbe79a73f39316f7ff5364ccba" exitCode=255 Dec 13 00:03:15 crc kubenswrapper[3020]: I1213 00:03:15.197779 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"e731595fa3a5d4caf6abb6b5f0de1823bd7feedbe79a73f39316f7ff5364ccba"} Dec 13 00:03:15 crc kubenswrapper[3020]: I1213 00:03:15.197803 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"684cf724d79b0e39769e8a7c94e8028394989479e264c13a2ba71dc45cb2fa84"} Dec 13 00:03:15 crc kubenswrapper[3020]: I1213 00:03:15.197823 3020 scope.go:117] "RemoveContainer" containerID="25cf66d93a23a1216ee61df90cad33f3170c0ff5e364f333b4adebf41e14f6b0" Dec 13 00:03:15 crc kubenswrapper[3020]: I1213 00:03:15.198028 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:15 crc kubenswrapper[3020]: I1213 00:03:15.200190 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:15 crc kubenswrapper[3020]: I1213 00:03:15.200218 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:15 crc kubenswrapper[3020]: I1213 00:03:15.200231 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:15 crc kubenswrapper[3020]: I1213 00:03:15.655672 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:16 crc kubenswrapper[3020]: I1213 00:03:16.203382 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/3.log" Dec 13 00:03:16 crc kubenswrapper[3020]: I1213 00:03:16.654939 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:17 crc kubenswrapper[3020]: E1213 00:03:17.561176 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 
00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:03:17 crc kubenswrapper[3020]: I1213 00:03:17.655867 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:18 crc kubenswrapper[3020]: I1213 00:03:18.654803 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:19 crc kubenswrapper[3020]: I1213 00:03:19.655306 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:20 crc kubenswrapper[3020]: I1213 00:03:20.654960 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:21 crc kubenswrapper[3020]: E1213 00:03:21.304788 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:03:21 crc kubenswrapper[3020]: I1213 00:03:21.434433 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:21 crc kubenswrapper[3020]: I1213 00:03:21.436532 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:21 crc kubenswrapper[3020]: I1213 00:03:21.436609 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:21 crc kubenswrapper[3020]: I1213 00:03:21.436626 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:21 crc kubenswrapper[3020]: I1213 00:03:21.436672 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:03:21 crc kubenswrapper[3020]: E1213 00:03:21.438288 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:03:21 crc kubenswrapper[3020]: I1213 00:03:21.575247 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:03:21 crc kubenswrapper[3020]: I1213 00:03:21.575651 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:21 crc kubenswrapper[3020]: I1213 00:03:21.577439 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:21 crc kubenswrapper[3020]: I1213 00:03:21.577567 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 
00:03:21 crc kubenswrapper[3020]: I1213 00:03:21.577591 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:21 crc kubenswrapper[3020]: I1213 00:03:21.655583 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:22 crc kubenswrapper[3020]: I1213 00:03:22.655520 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:23 crc kubenswrapper[3020]: I1213 00:03:23.655738 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:24 crc kubenswrapper[3020]: I1213 00:03:24.323065 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:03:24 crc kubenswrapper[3020]: I1213 00:03:24.323290 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:24 crc kubenswrapper[3020]: I1213 00:03:24.324722 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:24 crc kubenswrapper[3020]: I1213 00:03:24.324847 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:24 crc kubenswrapper[3020]: I1213 00:03:24.325265 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:24 crc kubenswrapper[3020]: I1213 00:03:24.575602 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:03:24 crc kubenswrapper[3020]: I1213 00:03:24.575712 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:03:24 crc kubenswrapper[3020]: I1213 00:03:24.654925 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:24 crc kubenswrapper[3020]: E1213 00:03:24.911236 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:03:25 crc kubenswrapper[3020]: I1213 00:03:25.656243 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup 
api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:25 crc kubenswrapper[3020]: I1213 00:03:25.839350 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:25 crc kubenswrapper[3020]: I1213 00:03:25.840407 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:25 crc kubenswrapper[3020]: I1213 00:03:25.840471 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:25 crc kubenswrapper[3020]: I1213 00:03:25.840486 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:26 crc kubenswrapper[3020]: I1213 00:03:26.655537 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:27 crc kubenswrapper[3020]: E1213 00:03:27.563531 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:03:27 crc kubenswrapper[3020]: E1213 00:03:27.563640 3020 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.18809d63824912f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,LastTimestamp:2025-12-13 00:01:44.642384627 +0000 UTC m=+0.395105704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:03:27 crc kubenswrapper[3020]: E1213 00:03:27.564841 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d6386dda438 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,LastTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:03:27 crc kubenswrapper[3020]: I1213 00:03:27.655938 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:28 crc kubenswrapper[3020]: E1213 00:03:28.307199 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:03:28 crc kubenswrapper[3020]: I1213 00:03:28.439048 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:28 crc kubenswrapper[3020]: I1213 00:03:28.441120 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:28 crc kubenswrapper[3020]: I1213 00:03:28.441229 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:28 crc kubenswrapper[3020]: I1213 00:03:28.441262 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:28 crc kubenswrapper[3020]: I1213 00:03:28.441324 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:03:28 crc kubenswrapper[3020]: E1213 00:03:28.442923 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:03:28 crc kubenswrapper[3020]: I1213 00:03:28.655354 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:29 crc kubenswrapper[3020]: I1213 00:03:29.656113 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:30 crc kubenswrapper[3020]: W1213 00:03:30.094423 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:30 crc kubenswrapper[3020]: E1213 00:03:30.094613 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:30 crc kubenswrapper[3020]: I1213 00:03:30.655412 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:31 crc kubenswrapper[3020]: I1213 00:03:31.655478 3020 csi_plugin.go:880] Failed 
to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:32 crc kubenswrapper[3020]: I1213 00:03:32.655336 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:33 crc kubenswrapper[3020]: I1213 00:03:33.655072 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:34 crc kubenswrapper[3020]: I1213 00:03:34.575236 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:03:34 crc kubenswrapper[3020]: I1213 00:03:34.575577 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:03:34 crc kubenswrapper[3020]: I1213 00:03:34.654964 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:34 crc kubenswrapper[3020]: I1213 00:03:34.838544 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:34 crc kubenswrapper[3020]: I1213 00:03:34.839381 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:34 crc kubenswrapper[3020]: I1213 00:03:34.839413 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:34 crc kubenswrapper[3020]: I1213 00:03:34.839425 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:34 crc kubenswrapper[3020]: E1213 00:03:34.911411 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:03:35 crc kubenswrapper[3020]: E1213 00:03:35.310086 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:03:35 crc kubenswrapper[3020]: I1213 00:03:35.443823 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:35 crc kubenswrapper[3020]: I1213 00:03:35.446570 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:35 crc 
kubenswrapper[3020]: I1213 00:03:35.446648 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:35 crc kubenswrapper[3020]: I1213 00:03:35.446668 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:35 crc kubenswrapper[3020]: I1213 00:03:35.446712 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:03:35 crc kubenswrapper[3020]: E1213 00:03:35.448378 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:03:35 crc kubenswrapper[3020]: I1213 00:03:35.654862 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:36 crc kubenswrapper[3020]: I1213 00:03:36.655092 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:37 crc kubenswrapper[3020]: E1213 00:03:37.426380 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d6386dda438 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,LastTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:03:37 crc kubenswrapper[3020]: I1213 00:03:37.655299 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:38 crc kubenswrapper[3020]: I1213 00:03:38.655298 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:39 crc kubenswrapper[3020]: I1213 00:03:39.655660 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:40 crc kubenswrapper[3020]: I1213 00:03:40.655669 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:41 crc kubenswrapper[3020]: I1213 00:03:41.655412 3020 csi_plugin.go:880] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:42 crc kubenswrapper[3020]: E1213 00:03:42.312359 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:03:42 crc kubenswrapper[3020]: I1213 00:03:42.449323 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:42 crc kubenswrapper[3020]: I1213 00:03:42.451026 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:42 crc kubenswrapper[3020]: I1213 00:03:42.451108 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:42 crc kubenswrapper[3020]: I1213 00:03:42.451139 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:42 crc kubenswrapper[3020]: I1213 00:03:42.451193 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:03:42 crc kubenswrapper[3020]: E1213 00:03:42.452932 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:03:42 crc kubenswrapper[3020]: I1213 00:03:42.655326 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:43 crc kubenswrapper[3020]: I1213 00:03:43.655319 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:43 crc kubenswrapper[3020]: W1213 00:03:43.885390 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:43 crc kubenswrapper[3020]: E1213 00:03:43.885569 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.575039 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.575192 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.575271 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.575545 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.577334 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.577385 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.577405 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.580780 3020 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"684cf724d79b0e39769e8a7c94e8028394989479e264c13a2ba71dc45cb2fa84"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.581361 3020 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://684cf724d79b0e39769e8a7c94e8028394989479e264c13a2ba71dc45cb2fa84" gracePeriod=30 Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.654934 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.655051 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.655129 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.655191 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.655232 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:03:44 crc kubenswrapper[3020]: I1213 00:03:44.656720 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:44 crc kubenswrapper[3020]: E1213 00:03:44.911592 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.283326 3020 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/4.log" Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.284067 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/3.log" Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.285497 3020 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="684cf724d79b0e39769e8a7c94e8028394989479e264c13a2ba71dc45cb2fa84" exitCode=255 Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.285702 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"684cf724d79b0e39769e8a7c94e8028394989479e264c13a2ba71dc45cb2fa84"} Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.285888 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"0785f009aa8e5a7abadeb3965619cb99994f273aa439f6663a560b90c56b70b0"} Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.286071 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.286072 3020 scope.go:117] "RemoveContainer" containerID="e731595fa3a5d4caf6abb6b5f0de1823bd7feedbe79a73f39316f7ff5364ccba" Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.288327 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.288859 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.289044 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.655105 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.838947 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.839978 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.840020 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:45 crc kubenswrapper[3020]: I1213 00:03:45.840072 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:46 crc kubenswrapper[3020]: I1213 00:03:46.294611 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/4.log" Dec 13 00:03:46 crc kubenswrapper[3020]: I1213 00:03:46.298023 3020 kubelet_node_status.go:402] "Setting node annotation to enable 
volume controller attach/detach" Dec 13 00:03:46 crc kubenswrapper[3020]: I1213 00:03:46.299018 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:46 crc kubenswrapper[3020]: I1213 00:03:46.299068 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:46 crc kubenswrapper[3020]: I1213 00:03:46.299089 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:46 crc kubenswrapper[3020]: I1213 00:03:46.655569 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:47 crc kubenswrapper[3020]: E1213 00:03:47.429019 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d6386dda438 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,LastTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:03:47 crc kubenswrapper[3020]: I1213 00:03:47.656071 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:48 crc kubenswrapper[3020]: I1213 00:03:48.654911 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:49 crc kubenswrapper[3020]: E1213 00:03:49.314488 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:03:49 crc kubenswrapper[3020]: I1213 00:03:49.453997 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:49 crc kubenswrapper[3020]: I1213 00:03:49.455767 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:49 crc kubenswrapper[3020]: I1213 00:03:49.456005 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:49 crc kubenswrapper[3020]: I1213 00:03:49.456168 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:49 crc kubenswrapper[3020]: I1213 00:03:49.456349 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:03:49 crc 
kubenswrapper[3020]: E1213 00:03:49.458120 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:03:49 crc kubenswrapper[3020]: I1213 00:03:49.655549 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:50 crc kubenswrapper[3020]: I1213 00:03:50.654978 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:51 crc kubenswrapper[3020]: I1213 00:03:51.574940 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:03:51 crc kubenswrapper[3020]: I1213 00:03:51.575167 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:51 crc kubenswrapper[3020]: I1213 00:03:51.577706 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:51 crc kubenswrapper[3020]: I1213 00:03:51.577774 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:51 crc kubenswrapper[3020]: I1213 00:03:51.577795 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:51 crc kubenswrapper[3020]: I1213 00:03:51.656540 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:52 crc kubenswrapper[3020]: I1213 00:03:52.655548 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:52 crc kubenswrapper[3020]: W1213 00:03:52.734761 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:52 crc kubenswrapper[3020]: E1213 00:03:52.734852 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:53 crc kubenswrapper[3020]: I1213 00:03:53.655182 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:54 crc kubenswrapper[3020]: I1213 00:03:54.323496 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:03:54 crc kubenswrapper[3020]: I1213 00:03:54.324051 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:54 crc kubenswrapper[3020]: I1213 00:03:54.324927 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:54 crc kubenswrapper[3020]: I1213 00:03:54.324996 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:54 crc kubenswrapper[3020]: I1213 00:03:54.325017 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:54 crc kubenswrapper[3020]: I1213 00:03:54.575617 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:03:54 crc kubenswrapper[3020]: I1213 00:03:54.575714 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:03:54 crc kubenswrapper[3020]: I1213 00:03:54.654736 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:54 crc kubenswrapper[3020]: E1213 00:03:54.912087 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:03:55 crc kubenswrapper[3020]: I1213 00:03:55.655435 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:56 crc kubenswrapper[3020]: E1213 00:03:56.317151 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:03:56 crc kubenswrapper[3020]: I1213 00:03:56.458841 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:03:56 crc kubenswrapper[3020]: I1213 00:03:56.460996 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:03:56 crc kubenswrapper[3020]: I1213 00:03:56.461079 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:03:56 crc kubenswrapper[3020]: I1213 00:03:56.461095 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:03:56 crc kubenswrapper[3020]: I1213 00:03:56.461124 3020 kubelet_node_status.go:77] "Attempting to register 
node" node="crc" Dec 13 00:03:56 crc kubenswrapper[3020]: E1213 00:03:56.462394 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:03:56 crc kubenswrapper[3020]: I1213 00:03:56.655645 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:57 crc kubenswrapper[3020]: E1213 00:03:57.431112 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d6386dda438 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,LastTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:03:57 crc kubenswrapper[3020]: I1213 00:03:57.654419 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:58 crc kubenswrapper[3020]: I1213 00:03:58.655396 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:03:59 crc kubenswrapper[3020]: I1213 00:03:59.655400 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:00 crc kubenswrapper[3020]: I1213 00:04:00.656018 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:01 crc kubenswrapper[3020]: I1213 00:04:01.657393 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:01 crc kubenswrapper[3020]: I1213 00:04:01.838747 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:01 crc kubenswrapper[3020]: I1213 00:04:01.840543 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:01 crc kubenswrapper[3020]: I1213 00:04:01.840592 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:01 crc 
kubenswrapper[3020]: I1213 00:04:01.840615 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:02 crc kubenswrapper[3020]: I1213 00:04:02.654675 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:03 crc kubenswrapper[3020]: E1213 00:04:03.318813 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:04:03 crc kubenswrapper[3020]: I1213 00:04:03.463509 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:03 crc kubenswrapper[3020]: I1213 00:04:03.464925 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:03 crc kubenswrapper[3020]: I1213 00:04:03.464976 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:03 crc kubenswrapper[3020]: I1213 00:04:03.464994 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:03 crc kubenswrapper[3020]: I1213 00:04:03.465040 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:04:03 crc kubenswrapper[3020]: E1213 00:04:03.466410 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:04:03 crc kubenswrapper[3020]: I1213 00:04:03.666038 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:03 crc kubenswrapper[3020]: W1213 00:04:03.666047 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:03 crc kubenswrapper[3020]: E1213 00:04:03.666105 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:04 crc kubenswrapper[3020]: I1213 00:04:04.575514 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:04:04 crc kubenswrapper[3020]: I1213 00:04:04.575624 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:04:04 crc kubenswrapper[3020]: I1213 00:04:04.654565 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:04 crc kubenswrapper[3020]: E1213 00:04:04.912980 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:04:05 crc kubenswrapper[3020]: I1213 00:04:05.654944 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:06 crc kubenswrapper[3020]: I1213 00:04:06.655342 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:07 crc kubenswrapper[3020]: E1213 00:04:07.434308 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d6386dda438 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,LastTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:04:07 crc kubenswrapper[3020]: I1213 00:04:07.655697 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:08 crc kubenswrapper[3020]: I1213 00:04:08.655278 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:09 crc kubenswrapper[3020]: I1213 00:04:09.655748 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:10 crc kubenswrapper[3020]: E1213 00:04:10.321587 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:04:10 crc 
kubenswrapper[3020]: I1213 00:04:10.466885 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:10 crc kubenswrapper[3020]: I1213 00:04:10.468136 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:10 crc kubenswrapper[3020]: I1213 00:04:10.468171 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:10 crc kubenswrapper[3020]: I1213 00:04:10.468181 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:10 crc kubenswrapper[3020]: I1213 00:04:10.468204 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:04:10 crc kubenswrapper[3020]: E1213 00:04:10.469623 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:04:10 crc kubenswrapper[3020]: I1213 00:04:10.655107 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:11 crc kubenswrapper[3020]: I1213 00:04:11.654915 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:12 crc kubenswrapper[3020]: I1213 00:04:12.655399 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:13 crc kubenswrapper[3020]: I1213 00:04:13.654793 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:14 crc kubenswrapper[3020]: I1213 00:04:14.576255 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:04:14 crc kubenswrapper[3020]: I1213 00:04:14.577423 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:04:14 crc kubenswrapper[3020]: I1213 00:04:14.577767 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:04:14 crc kubenswrapper[3020]: I1213 00:04:14.578118 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:14 crc 
kubenswrapper[3020]: I1213 00:04:14.580007 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:14 crc kubenswrapper[3020]: I1213 00:04:14.580222 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:14 crc kubenswrapper[3020]: I1213 00:04:14.580387 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:14 crc kubenswrapper[3020]: I1213 00:04:14.583689 3020 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"0785f009aa8e5a7abadeb3965619cb99994f273aa439f6663a560b90c56b70b0"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Dec 13 00:04:14 crc kubenswrapper[3020]: I1213 00:04:14.584701 3020 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://0785f009aa8e5a7abadeb3965619cb99994f273aa439f6663a560b90c56b70b0" gracePeriod=30 Dec 13 00:04:14 crc kubenswrapper[3020]: I1213 00:04:14.655540 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:14 crc kubenswrapper[3020]: E1213 00:04:14.661346 3020 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" Dec 13 00:04:14 crc kubenswrapper[3020]: E1213 00:04:14.913302 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:04:15 crc kubenswrapper[3020]: I1213 00:04:15.372269 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log" Dec 13 00:04:15 crc kubenswrapper[3020]: I1213 00:04:15.372954 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/4.log" Dec 13 00:04:15 crc kubenswrapper[3020]: I1213 00:04:15.373798 3020 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="0785f009aa8e5a7abadeb3965619cb99994f273aa439f6663a560b90c56b70b0" exitCode=255 Dec 13 00:04:15 crc kubenswrapper[3020]: I1213 00:04:15.373862 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"0785f009aa8e5a7abadeb3965619cb99994f273aa439f6663a560b90c56b70b0"} Dec 13 00:04:15 crc kubenswrapper[3020]: I1213 00:04:15.373949 3020 scope.go:117] "RemoveContainer" 
containerID="684cf724d79b0e39769e8a7c94e8028394989479e264c13a2ba71dc45cb2fa84" Dec 13 00:04:15 crc kubenswrapper[3020]: I1213 00:04:15.374114 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:15 crc kubenswrapper[3020]: I1213 00:04:15.375062 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:15 crc kubenswrapper[3020]: I1213 00:04:15.375111 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:15 crc kubenswrapper[3020]: I1213 00:04:15.375181 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:15 crc kubenswrapper[3020]: I1213 00:04:15.376938 3020 scope.go:117] "RemoveContainer" containerID="0785f009aa8e5a7abadeb3965619cb99994f273aa439f6663a560b90c56b70b0" Dec 13 00:04:15 crc kubenswrapper[3020]: E1213 00:04:15.377843 3020 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" Dec 13 00:04:15 crc kubenswrapper[3020]: I1213 00:04:15.655785 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:16 crc kubenswrapper[3020]: I1213 00:04:16.379673 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log" Dec 13 00:04:16 crc kubenswrapper[3020]: I1213 00:04:16.654933 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:17 crc kubenswrapper[3020]: E1213 00:04:17.323316 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:04:17 crc kubenswrapper[3020]: E1213 00:04:17.436433 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d6386dda438 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,LastTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Dec 13 00:04:17 crc kubenswrapper[3020]: I1213 00:04:17.470229 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:17 crc kubenswrapper[3020]: I1213 00:04:17.472164 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:17 crc kubenswrapper[3020]: I1213 00:04:17.472234 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:17 crc kubenswrapper[3020]: I1213 00:04:17.472252 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:17 crc kubenswrapper[3020]: I1213 00:04:17.472292 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:04:17 crc kubenswrapper[3020]: E1213 00:04:17.473824 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:04:17 crc kubenswrapper[3020]: I1213 00:04:17.655835 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:18 crc kubenswrapper[3020]: I1213 00:04:18.655048 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:19 crc kubenswrapper[3020]: I1213 00:04:19.543485 3020 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:04:19 crc kubenswrapper[3020]: I1213 00:04:19.543749 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:19 crc kubenswrapper[3020]: I1213 00:04:19.545465 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:19 crc kubenswrapper[3020]: I1213 00:04:19.545512 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:19 crc kubenswrapper[3020]: I1213 00:04:19.545528 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:19 crc kubenswrapper[3020]: I1213 00:04:19.546911 3020 scope.go:117] "RemoveContainer" containerID="0785f009aa8e5a7abadeb3965619cb99994f273aa439f6663a560b90c56b70b0" Dec 13 00:04:19 crc kubenswrapper[3020]: E1213 00:04:19.547558 3020 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" Dec 13 00:04:19 crc kubenswrapper[3020]: I1213 00:04:19.654899 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such 
host Dec 13 00:04:20 crc kubenswrapper[3020]: I1213 00:04:20.655401 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:21 crc kubenswrapper[3020]: I1213 00:04:21.654579 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:22 crc kubenswrapper[3020]: W1213 00:04:22.084550 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:22 crc kubenswrapper[3020]: E1213 00:04:22.084645 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:22 crc kubenswrapper[3020]: I1213 00:04:22.654976 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:22 crc kubenswrapper[3020]: W1213 00:04:22.939561 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:22 crc kubenswrapper[3020]: E1213 00:04:22.939654 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:23 crc kubenswrapper[3020]: I1213 00:04:23.655396 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:24 crc kubenswrapper[3020]: E1213 00:04:24.325792 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:04:24 crc kubenswrapper[3020]: I1213 00:04:24.474129 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:24 crc kubenswrapper[3020]: I1213 00:04:24.475605 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:24 crc kubenswrapper[3020]: I1213 00:04:24.475658 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:24 crc kubenswrapper[3020]: I1213 
00:04:24.475670 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:24 crc kubenswrapper[3020]: I1213 00:04:24.475705 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:04:24 crc kubenswrapper[3020]: E1213 00:04:24.477185 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:04:24 crc kubenswrapper[3020]: I1213 00:04:24.654997 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:24 crc kubenswrapper[3020]: E1213 00:04:24.913500 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:04:25 crc kubenswrapper[3020]: I1213 00:04:25.655444 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:26 crc kubenswrapper[3020]: I1213 00:04:26.654690 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:27 crc kubenswrapper[3020]: E1213 00:04:27.438721 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d6386dda438 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,LastTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:04:27 crc kubenswrapper[3020]: I1213 00:04:27.654585 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:28 crc kubenswrapper[3020]: I1213 00:04:28.655032 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:29 crc kubenswrapper[3020]: I1213 00:04:29.655923 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:30 crc kubenswrapper[3020]: I1213 00:04:30.655038 3020 csi_plugin.go:880] Failed 
to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:30 crc kubenswrapper[3020]: I1213 00:04:30.838754 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:30 crc kubenswrapper[3020]: I1213 00:04:30.840983 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:30 crc kubenswrapper[3020]: I1213 00:04:30.841135 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:30 crc kubenswrapper[3020]: I1213 00:04:30.841264 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:31 crc kubenswrapper[3020]: E1213 00:04:31.328065 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable" interval="7s" Dec 13 00:04:31 crc kubenswrapper[3020]: W1213 00:04:31.404307 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable Dec 13 00:04:31 crc kubenswrapper[3020]: E1213 00:04:31.404383 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable Dec 13 00:04:31 crc kubenswrapper[3020]: I1213 00:04:31.479128 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:31 crc kubenswrapper[3020]: I1213 00:04:31.481795 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:31 crc kubenswrapper[3020]: I1213 00:04:31.481830 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:31 crc kubenswrapper[3020]: I1213 00:04:31.481845 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:31 crc kubenswrapper[3020]: I1213 00:04:31.481872 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:04:31 crc kubenswrapper[3020]: E1213 00:04:31.482365 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable" node="crc" Dec 13 00:04:31 crc kubenswrapper[3020]: I1213 00:04:31.654724 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable Dec 13 00:04:32 crc 
kubenswrapper[3020]: I1213 00:04:32.655214 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:33 crc kubenswrapper[3020]: I1213 00:04:33.655174 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:33 crc kubenswrapper[3020]: I1213 00:04:33.839212 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:33 crc kubenswrapper[3020]: I1213 00:04:33.840926 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:33 crc kubenswrapper[3020]: I1213 00:04:33.840969 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:33 crc kubenswrapper[3020]: I1213 00:04:33.840983 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:33 crc kubenswrapper[3020]: I1213 00:04:33.842409 3020 scope.go:117] "RemoveContainer" containerID="0785f009aa8e5a7abadeb3965619cb99994f273aa439f6663a560b90c56b70b0" Dec 13 00:04:33 crc kubenswrapper[3020]: E1213 00:04:33.843222 3020 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" Dec 13 00:04:34 crc kubenswrapper[3020]: I1213 00:04:34.655038 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:34 crc kubenswrapper[3020]: E1213 00:04:34.914086 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:04:35 crc kubenswrapper[3020]: I1213 00:04:35.655133 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:36 crc kubenswrapper[3020]: I1213 00:04:36.654602 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:37 crc kubenswrapper[3020]: E1213 00:04:37.441372 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d6386dda438 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,LastTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:04:37 crc kubenswrapper[3020]: I1213 00:04:37.654914 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:38 crc kubenswrapper[3020]: E1213 00:04:38.330616 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:04:38 crc kubenswrapper[3020]: I1213 00:04:38.482654 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:38 crc kubenswrapper[3020]: I1213 00:04:38.483727 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:38 crc kubenswrapper[3020]: I1213 00:04:38.483760 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:38 crc kubenswrapper[3020]: I1213 00:04:38.483771 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:38 crc kubenswrapper[3020]: I1213 00:04:38.483796 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:04:38 crc kubenswrapper[3020]: E1213 00:04:38.484945 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:04:38 crc kubenswrapper[3020]: I1213 00:04:38.656348 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:39 crc kubenswrapper[3020]: I1213 00:04:39.655262 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:40 crc kubenswrapper[3020]: I1213 00:04:40.654870 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:41 crc kubenswrapper[3020]: I1213 00:04:41.654917 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:42 crc kubenswrapper[3020]: I1213 00:04:42.655310 3020 
csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:43 crc kubenswrapper[3020]: I1213 00:04:43.655135 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:44 crc kubenswrapper[3020]: I1213 00:04:44.655402 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:04:44 crc kubenswrapper[3020]: I1213 00:04:44.655507 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:04:44 crc kubenswrapper[3020]: I1213 00:04:44.655591 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:04:44 crc kubenswrapper[3020]: I1213 00:04:44.655660 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:04:44 crc kubenswrapper[3020]: I1213 00:04:44.655707 3020 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:04:44 crc kubenswrapper[3020]: I1213 00:04:44.656080 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:44 crc kubenswrapper[3020]: I1213 00:04:44.839067 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:44 crc kubenswrapper[3020]: I1213 00:04:44.840858 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:44 crc kubenswrapper[3020]: I1213 00:04:44.840951 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:44 crc kubenswrapper[3020]: I1213 00:04:44.840986 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:44 crc kubenswrapper[3020]: I1213 00:04:44.844646 3020 scope.go:117] "RemoveContainer" containerID="0785f009aa8e5a7abadeb3965619cb99994f273aa439f6663a560b90c56b70b0" Dec 13 00:04:44 crc kubenswrapper[3020]: E1213 00:04:44.846814 3020 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" Dec 13 00:04:44 crc kubenswrapper[3020]: E1213 00:04:44.914923 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:04:45 crc kubenswrapper[3020]: E1213 00:04:45.332813 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:04:45 crc kubenswrapper[3020]: I1213 00:04:45.486085 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:45 crc kubenswrapper[3020]: I1213 00:04:45.487104 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:45 crc kubenswrapper[3020]: I1213 00:04:45.487142 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:45 crc kubenswrapper[3020]: I1213 00:04:45.487157 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:45 crc kubenswrapper[3020]: I1213 00:04:45.487185 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:04:45 crc kubenswrapper[3020]: E1213 00:04:45.488645 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:04:45 crc kubenswrapper[3020]: I1213 00:04:45.655492 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:46 crc kubenswrapper[3020]: I1213 00:04:46.654205 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:47 crc kubenswrapper[3020]: E1213 00:04:47.443047 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d6386dda438 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,LastTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:04:47 crc kubenswrapper[3020]: I1213 00:04:47.654895 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:48 crc kubenswrapper[3020]: I1213 00:04:48.655548 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:49 crc kubenswrapper[3020]: I1213 00:04:49.655355 3020 csi_plugin.go:880] Failed to contact API server when waiting 
for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:50 crc kubenswrapper[3020]: I1213 00:04:50.655673 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:51 crc kubenswrapper[3020]: I1213 00:04:51.654777 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:52 crc kubenswrapper[3020]: E1213 00:04:52.334995 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:04:52 crc kubenswrapper[3020]: I1213 00:04:52.489636 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:52 crc kubenswrapper[3020]: I1213 00:04:52.491752 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:52 crc kubenswrapper[3020]: I1213 00:04:52.491804 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:52 crc kubenswrapper[3020]: I1213 00:04:52.491819 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:52 crc kubenswrapper[3020]: I1213 00:04:52.491854 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:04:52 crc kubenswrapper[3020]: E1213 00:04:52.492979 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:04:52 crc kubenswrapper[3020]: I1213 00:04:52.654889 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:53 crc kubenswrapper[3020]: I1213 00:04:53.655311 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:53 crc kubenswrapper[3020]: W1213 00:04:53.731361 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:53 crc kubenswrapper[3020]: E1213 00:04:53.731528 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 
00:04:54 crc kubenswrapper[3020]: I1213 00:04:54.655809 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:54 crc kubenswrapper[3020]: E1213 00:04:54.915075 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:04:55 crc kubenswrapper[3020]: I1213 00:04:55.655191 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:55 crc kubenswrapper[3020]: I1213 00:04:55.838887 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:55 crc kubenswrapper[3020]: I1213 00:04:55.840221 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:55 crc kubenswrapper[3020]: I1213 00:04:55.840301 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:55 crc kubenswrapper[3020]: I1213 00:04:55.840319 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:55 crc kubenswrapper[3020]: I1213 00:04:55.842175 3020 scope.go:117] "RemoveContainer" containerID="0785f009aa8e5a7abadeb3965619cb99994f273aa439f6663a560b90c56b70b0" Dec 13 00:04:56 crc kubenswrapper[3020]: I1213 00:04:56.488680 3020 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log" Dec 13 00:04:56 crc kubenswrapper[3020]: I1213 00:04:56.491194 3020 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"96c5208412fe36ff09d7a05351ff92c5a40e830659baaf6c71e992ea0e4dc813"} Dec 13 00:04:56 crc kubenswrapper[3020]: I1213 00:04:56.491339 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:56 crc kubenswrapper[3020]: I1213 00:04:56.492722 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:56 crc kubenswrapper[3020]: I1213 00:04:56.492818 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:56 crc kubenswrapper[3020]: I1213 00:04:56.492849 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:56 crc kubenswrapper[3020]: I1213 00:04:56.655574 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:57 crc kubenswrapper[3020]: E1213 00:04:57.445100 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.18809d6386dda438 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,LastTimestamp:2025-12-13 00:01:44.719230008 +0000 UTC m=+0.471951125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:04:57 crc kubenswrapper[3020]: I1213 00:04:57.655160 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:58 crc kubenswrapper[3020]: I1213 00:04:58.655267 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:59 crc kubenswrapper[3020]: E1213 00:04:59.336874 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Dec 13 00:04:59 crc kubenswrapper[3020]: I1213 00:04:59.494049 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:59 crc kubenswrapper[3020]: I1213 00:04:59.495640 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:59 crc kubenswrapper[3020]: I1213 00:04:59.495702 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:59 crc kubenswrapper[3020]: I1213 00:04:59.495716 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:04:59 crc kubenswrapper[3020]: I1213 00:04:59.495751 3020 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:04:59 crc kubenswrapper[3020]: E1213 00:04:59.497166 3020 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Dec 13 00:04:59 crc kubenswrapper[3020]: I1213 00:04:59.655788 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:04:59 crc kubenswrapper[3020]: I1213 00:04:59.839408 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:04:59 crc kubenswrapper[3020]: I1213 00:04:59.840646 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:04:59 crc kubenswrapper[3020]: I1213 00:04:59.840694 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:04:59 crc kubenswrapper[3020]: I1213 00:04:59.840705 3020 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:05:00 crc kubenswrapper[3020]: I1213 00:05:00.655801 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:05:01 crc kubenswrapper[3020]: I1213 00:05:01.575666 3020 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:05:01 crc kubenswrapper[3020]: I1213 00:05:01.576431 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:05:01 crc kubenswrapper[3020]: I1213 00:05:01.578938 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:05:01 crc kubenswrapper[3020]: I1213 00:05:01.578993 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:05:01 crc kubenswrapper[3020]: I1213 00:05:01.579007 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:05:01 crc kubenswrapper[3020]: I1213 00:05:01.655006 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:05:02 crc kubenswrapper[3020]: I1213 00:05:02.654916 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:05:03 crc kubenswrapper[3020]: I1213 00:05:03.654638 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:05:04 crc kubenswrapper[3020]: W1213 00:05:04.315606 3020 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:05:04 crc kubenswrapper[3020]: E1213 00:05:04.315683 3020 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:05:04 crc kubenswrapper[3020]: I1213 00:05:04.322854 3020 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:05:04 crc kubenswrapper[3020]: I1213 00:05:04.323074 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:05:04 crc kubenswrapper[3020]: I1213 00:05:04.328679 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:05:04 crc kubenswrapper[3020]: I1213 00:05:04.328744 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:05:04 crc 
kubenswrapper[3020]: I1213 00:05:04.328765 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:05:04 crc kubenswrapper[3020]: I1213 00:05:04.575623 3020 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:05:04 crc kubenswrapper[3020]: I1213 00:05:04.575757 3020 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 00:05:04 crc kubenswrapper[3020]: I1213 00:05:04.654793 3020 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Dec 13 00:05:04 crc kubenswrapper[3020]: I1213 00:05:04.838624 3020 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:05:04 crc kubenswrapper[3020]: I1213 00:05:04.840076 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:05:04 crc kubenswrapper[3020]: I1213 00:05:04.840119 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:05:04 crc kubenswrapper[3020]: I1213 00:05:04.840130 3020 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:05:04 crc kubenswrapper[3020]: E1213 00:05:04.915413 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:05:05 crc kubenswrapper[3020]: I1213 00:05:05.372211 3020 reconstruct_new.go:210] "DevicePaths of reconstructed volumes updated" Dec 13 00:05:06 crc systemd[1]: Stopping Kubernetes Kubelet... Dec 13 00:05:06 crc kubenswrapper[3020]: I1213 00:05:06.116867 3020 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 13 00:05:06 crc systemd[1]: kubelet.service: Deactivated successfully. Dec 13 00:05:06 crc systemd[1]: Stopped Kubernetes Kubelet. Dec 13 00:05:06 crc systemd[1]: kubelet.service: Consumed 11.144s CPU time. -- Boot 6438f10de0bf466fb6a4f002fdf9da10 -- Dec 13 00:06:03 crc systemd[1]: Starting Kubernetes Kubelet... Dec 13 00:06:03 crc kubenswrapper[3562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:06:03 crc kubenswrapper[3562]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 13 00:06:03 crc kubenswrapper[3562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:06:03 crc kubenswrapper[3562]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:06:03 crc kubenswrapper[3562]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 00:06:03 crc kubenswrapper[3562]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.629531 3562 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631074 3562 feature_gate.go:227] unrecognized feature gate: ImagePolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631084 3562 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631091 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631097 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631110 3562 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631118 3562 feature_gate.go:227] unrecognized feature gate: MetricsServer Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631124 3562 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631130 3562 feature_gate.go:227] unrecognized feature gate: InsightsConfig Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631135 3562 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631140 3562 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631145 3562 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631150 3562 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631155 3562 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631160 3562 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
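The restart above replays the kubelet startup banner: every deprecated CLI flag and every deprecated feature gate is warned about before flags are parsed. A minimal Go sketch (an editorial annotation, not part of the log; the regular expressions are assumptions modeled on the message shapes above) that pulls a de-duplicated list of those warnings out of a captured journal on stdin:

    // depwarn.go - summarize deprecation warnings in a kubelet journal.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        flagRe := regexp.MustCompile(`Flag (--[\w-]+) has been deprecated`)
        gateRe := regexp.MustCompile(`Setting deprecated feature gate (\w+)=`)
        seen := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            line := sc.Text()
            if m := flagRe.FindStringSubmatch(line); m != nil && !seen[m[1]] {
                seen[m[1]] = true
                fmt.Println("deprecated flag:", m[1])
            }
            if m := gateRe.FindStringSubmatch(line); m != nil && !seen[m[1]] {
                seen[m[1]] = true
                fmt.Println("deprecated feature gate:", m[1])
            }
        }
    }

Run as go run depwarn.go < kubelet.log; each flag or gate is reported once no matter how many restarts the journal covers.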
Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631166 3562 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631171 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631177 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631184 3562 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631196 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631203 3562 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631213 3562 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631220 3562 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631226 3562 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631231 3562 feature_gate.go:227] unrecognized feature gate: PinnedImages Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631236 3562 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631241 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631246 3562 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631251 3562 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631257 3562 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631262 3562 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631267 3562 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631272 3562 feature_gate.go:227] unrecognized feature gate: NewOLM Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631277 3562 feature_gate.go:227] unrecognized feature gate: SignatureStores Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631283 3562 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631288 3562 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631293 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631299 3562 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631304 3562 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631310 3562 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631315 3562 feature_gate.go:227] unrecognized feature gate: 
PrivateHostedZoneAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631320 3562 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631325 3562 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631330 3562 feature_gate.go:227] unrecognized feature gate: PlatformOperators Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631336 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631341 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631346 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631354 3562 feature_gate.go:227] unrecognized feature gate: GatewayAPI Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631359 3562 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631364 3562 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631370 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631375 3562 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631380 3562 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631385 3562 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631390 3562 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631395 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631399 3562 feature_gate.go:227] unrecognized feature gate: Example Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631404 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631409 3562 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631414 3562 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.631419 3562 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631732 3562 flags.go:64] FLAG: --address="0.0.0.0" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631748 3562 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631755 3562 flags.go:64] FLAG: --anonymous-auth="true" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631760 3562 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631765 3562 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631770 3562 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631776 3562 flags.go:64] 
FLAG: --authorization-mode="AlwaysAllow" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631781 3562 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631785 3562 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631790 3562 flags.go:64] FLAG: --azure-container-registry-config="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631794 3562 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631798 3562 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631802 3562 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631807 3562 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631814 3562 flags.go:64] FLAG: --cgroup-root="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631819 3562 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631823 3562 flags.go:64] FLAG: --client-ca-file="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631827 3562 flags.go:64] FLAG: --cloud-config="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631831 3562 flags.go:64] FLAG: --cloud-provider="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631835 3562 flags.go:64] FLAG: --cluster-dns="[]" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631840 3562 flags.go:64] FLAG: --cluster-domain="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631845 3562 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631849 3562 flags.go:64] FLAG: --config-dir="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631853 3562 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631858 3562 flags.go:64] FLAG: --container-log-max-files="5" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631863 3562 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631867 3562 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631871 3562 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631876 3562 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631880 3562 flags.go:64] FLAG: --contention-profiling="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631884 3562 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631888 3562 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631893 3562 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631897 3562 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631917 3562 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631922 3562 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 13 00:06:03 
crc kubenswrapper[3562]: I1213 00:06:03.631926 3562 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631930 3562 flags.go:64] FLAG: --enable-load-reader="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631934 3562 flags.go:64] FLAG: --enable-server="true" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631938 3562 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631943 3562 flags.go:64] FLAG: --event-burst="100" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631947 3562 flags.go:64] FLAG: --event-qps="50" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631951 3562 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631955 3562 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631959 3562 flags.go:64] FLAG: --eviction-hard="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631965 3562 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631969 3562 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631973 3562 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631977 3562 flags.go:64] FLAG: --eviction-soft="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631981 3562 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631986 3562 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631990 3562 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631994 3562 flags.go:64] FLAG: --experimental-mounter-path="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.631998 3562 flags.go:64] FLAG: --fail-swap-on="true" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632003 3562 flags.go:64] FLAG: --feature-gates="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632008 3562 flags.go:64] FLAG: --file-check-frequency="20s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632012 3562 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632016 3562 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632020 3562 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632024 3562 flags.go:64] FLAG: --healthz-port="10248" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632028 3562 flags.go:64] FLAG: --help="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632032 3562 flags.go:64] FLAG: --hostname-override="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632036 3562 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632041 3562 flags.go:64] FLAG: --http-check-frequency="20s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632045 3562 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632051 3562 flags.go:64] FLAG: --image-credential-provider-config="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 
00:06:03.632055 3562 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632059 3562 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632063 3562 flags.go:64] FLAG: --image-service-endpoint="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632067 3562 flags.go:64] FLAG: --iptables-drop-bit="15" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632071 3562 flags.go:64] FLAG: --iptables-masquerade-bit="14" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632075 3562 flags.go:64] FLAG: --keep-terminated-pod-volumes="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632079 3562 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632083 3562 flags.go:64] FLAG: --kube-api-burst="100" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632087 3562 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632091 3562 flags.go:64] FLAG: --kube-api-qps="50" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632095 3562 flags.go:64] FLAG: --kube-reserved="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632099 3562 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632103 3562 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632108 3562 flags.go:64] FLAG: --kubelet-cgroups="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632112 3562 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632116 3562 flags.go:64] FLAG: --lock-file="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632120 3562 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632125 3562 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632129 3562 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632135 3562 flags.go:64] FLAG: --log-json-split-stream="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632143 3562 flags.go:64] FLAG: --logging-format="text" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632147 3562 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632152 3562 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632156 3562 flags.go:64] FLAG: --manifest-url="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632160 3562 flags.go:64] FLAG: --manifest-url-header="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632165 3562 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632169 3562 flags.go:64] FLAG: --max-open-files="1000000" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632174 3562 flags.go:64] FLAG: --max-pods="110" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632179 3562 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632183 3562 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632187 
3562 flags.go:64] FLAG: --memory-manager-policy="None" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632192 3562 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632197 3562 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632202 3562 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632206 3562 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632216 3562 flags.go:64] FLAG: --node-status-max-images="50" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632221 3562 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632225 3562 flags.go:64] FLAG: --oom-score-adj="-999" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632230 3562 flags.go:64] FLAG: --pod-cidr="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632234 3562 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0319702e115e7248d135e58342ccf3f458e19c39e86dc8e79036f578ce80a4" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632240 3562 flags.go:64] FLAG: --pod-manifest-path="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632244 3562 flags.go:64] FLAG: --pod-max-pids="-1" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632249 3562 flags.go:64] FLAG: --pods-per-core="0" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632253 3562 flags.go:64] FLAG: --port="10250" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632257 3562 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632261 3562 flags.go:64] FLAG: --provider-id="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632265 3562 flags.go:64] FLAG: --qos-reserved="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632269 3562 flags.go:64] FLAG: --read-only-port="10255" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632274 3562 flags.go:64] FLAG: --register-node="true" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632278 3562 flags.go:64] FLAG: --register-schedulable="true" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632282 3562 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632288 3562 flags.go:64] FLAG: --registry-burst="10" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632293 3562 flags.go:64] FLAG: --registry-qps="5" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632298 3562 flags.go:64] FLAG: --reserved-cpus="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632302 3562 flags.go:64] FLAG: --reserved-memory="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632307 3562 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632312 3562 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632316 3562 flags.go:64] FLAG: --rotate-certificates="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632320 3562 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632324 3562 flags.go:64] FLAG: 
--runonce="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632328 3562 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632333 3562 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632337 3562 flags.go:64] FLAG: --seccomp-default="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632341 3562 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632345 3562 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632349 3562 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632353 3562 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632358 3562 flags.go:64] FLAG: --storage-driver-password="root" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632362 3562 flags.go:64] FLAG: --storage-driver-secure="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632366 3562 flags.go:64] FLAG: --storage-driver-table="stats" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632370 3562 flags.go:64] FLAG: --storage-driver-user="root" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632374 3562 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632378 3562 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632382 3562 flags.go:64] FLAG: --system-cgroups="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632387 3562 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632393 3562 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632397 3562 flags.go:64] FLAG: --tls-cert-file="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632401 3562 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632406 3562 flags.go:64] FLAG: --tls-min-version="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632411 3562 flags.go:64] FLAG: --tls-private-key-file="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632415 3562 flags.go:64] FLAG: --topology-manager-policy="none" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632419 3562 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632423 3562 flags.go:64] FLAG: --topology-manager-scope="container" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632427 3562 flags.go:64] FLAG: --v="2" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632432 3562 flags.go:64] FLAG: --version="false" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632439 3562 flags.go:64] FLAG: --vmodule="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632444 3562 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632448 3562 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632501 3562 feature_gate.go:227] unrecognized feature gate: ImagePolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632507 3562 
feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632512 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632517 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632525 3562 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632531 3562 feature_gate.go:227] unrecognized feature gate: MetricsServer Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632536 3562 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632541 3562 feature_gate.go:227] unrecognized feature gate: InsightsConfig Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632546 3562 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632551 3562 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632556 3562 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632561 3562 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632566 3562 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632570 3562 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632576 3562 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632581 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632586 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632590 3562 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632595 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632600 3562 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632605 3562 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632610 3562 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632615 3562 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632621 3562 feature_gate.go:227] unrecognized feature gate: PinnedImages Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632626 3562 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632630 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632635 3562 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Dec 13 
00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632640 3562 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632645 3562 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632651 3562 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632656 3562 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632661 3562 feature_gate.go:227] unrecognized feature gate: NewOLM Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632666 3562 feature_gate.go:227] unrecognized feature gate: SignatureStores Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632671 3562 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632676 3562 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632681 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632686 3562 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632690 3562 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632695 3562 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632700 3562 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632705 3562 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632710 3562 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632715 3562 feature_gate.go:227] unrecognized feature gate: PlatformOperators Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632720 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632725 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632729 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632735 3562 feature_gate.go:227] unrecognized feature gate: GatewayAPI Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632739 3562 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632744 3562 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632749 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632754 3562 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632759 3562 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632763 3562 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 
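These "unrecognized feature gate" warnings are non-fatal: gates defined only in the OpenShift API (GatewayAPI, InsightsConfig, and the rest) are unknown to the kubelet's own gate registry, so they are skipped with a warning, while the gates the binary does know end up in the effective map logged just below. A minimal Go sketch of that merge behavior (an illustration under assumed semantics, not kubelet's actual code):

    // gates.go - editorial sketch: unknown gates warn instead of failing
    // startup; known gates land in the effective map logged afterwards.
    package main

    import "fmt"

    func main() {
        known := map[string]bool{ // defaults compiled into this (hypothetical) binary
            "KMSv1":    false,
            "NodeSwap": false,
        }
        requested := map[string]bool{ // e.g. merged from the cluster FeatureGate config
            "KMSv1":      true,
            "GatewayAPI": true, // OpenShift-only gate, unknown to the kubelet
        }
        for name, val := range requested {
            if _, ok := known[name]; !ok {
                fmt.Printf("W unrecognized feature gate: %s\n", name)
                continue
            }
            known[name] = val
        }
        fmt.Printf("I feature gates: %v\n", known)
    }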
Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632769 3562 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632774 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632778 3562 feature_gate.go:227] unrecognized feature gate: Example Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632783 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632788 3562 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632793 3562 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.632798 3562 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.632804 3562 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.639251 3562 server.go:487] "Kubelet version" kubeletVersion="v1.29.5+29c95f3" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.639540 3562 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639627 3562 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639637 3562 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639645 3562 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639652 3562 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639659 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639666 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639673 3562 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639680 3562 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639687 3562 feature_gate.go:227] unrecognized feature gate: PlatformOperators Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639694 3562 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639701 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639708 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639716 3562 feature_gate.go:227] unrecognized feature gate: GatewayAPI Dec 13 00:06:03 crc 
kubenswrapper[3562]: W1213 00:06:03.639723 3562 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639730 3562 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639737 3562 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639744 3562 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639750 3562 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639758 3562 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639764 3562 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639771 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639778 3562 feature_gate.go:227] unrecognized feature gate: Example Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639784 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639791 3562 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639798 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639805 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639812 3562 feature_gate.go:227] unrecognized feature gate: ImagePolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639818 3562 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639826 3562 feature_gate.go:227] unrecognized feature gate: MetricsServer Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639835 3562 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639842 3562 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639850 3562 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639857 3562 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639864 3562 feature_gate.go:227] unrecognized feature gate: InsightsConfig Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639871 3562 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639878 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639886 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639892 3562 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639899 3562 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Dec 13 
00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639931 3562 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639942 3562 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639951 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639960 3562 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639969 3562 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639976 3562 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639983 3562 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639990 3562 feature_gate.go:227] unrecognized feature gate: PinnedImages Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.639997 3562 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640004 3562 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640011 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640017 3562 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640024 3562 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640031 3562 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640038 3562 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640045 3562 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640051 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640058 3562 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640065 3562 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640072 3562 feature_gate.go:227] unrecognized feature gate: NewOLM Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640079 3562 feature_gate.go:227] unrecognized feature gate: SignatureStores Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.640087 3562 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640180 3562 feature_gate.go:227] unrecognized feature gate: 
ClusterAPIInstallAzure Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640188 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640195 3562 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640203 3562 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640210 3562 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640218 3562 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640226 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640233 3562 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640239 3562 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640246 3562 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640253 3562 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640260 3562 feature_gate.go:227] unrecognized feature gate: PinnedImages Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640267 3562 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640273 3562 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640280 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640286 3562 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640293 3562 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640300 3562 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640307 3562 feature_gate.go:227] unrecognized feature gate: SignatureStores Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640314 3562 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640320 3562 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640327 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640334 3562 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640340 3562 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640347 3562 feature_gate.go:227] unrecognized feature gate: NewOLM Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640354 3562 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640360 3562 
feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640367 3562 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640374 3562 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640381 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640388 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640396 3562 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640403 3562 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640410 3562 feature_gate.go:227] unrecognized feature gate: PlatformOperators Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640417 3562 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640424 3562 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640430 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640437 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640444 3562 feature_gate.go:227] unrecognized feature gate: GatewayAPI Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640451 3562 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640464 3562 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640471 3562 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640478 3562 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640485 3562 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640492 3562 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640498 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640505 3562 feature_gate.go:227] unrecognized feature gate: Example Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640512 3562 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640518 3562 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640525 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640531 3562 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640538 3562 feature_gate.go:227] unrecognized feature gate: ImagePolicy Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640545 3562 
feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640552 3562 feature_gate.go:227] unrecognized feature gate: MetricsServer Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640559 3562 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640566 3562 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640573 3562 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640580 3562 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640588 3562 feature_gate.go:227] unrecognized feature gate: InsightsConfig Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.640594 3562 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.640602 3562 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.640969 3562 server.go:925] "Client rotation is on, will bootstrap in background" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.644062 3562 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.644969 3562 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
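The entries that follow report the client certificate's expiry, a jittered rotation deadline, and the computed wait (about 149 days here). A minimal Go sketch of how such a deadline can be derived from a certificate's NotBefore/NotAfter; the 70-90% window and the issue time are assumptions for illustration, and only the expiry (2026-06-27 13:05:20 UTC) is taken from the log:

    // rotation.go - editorial sketch of a jittered rotation deadline.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        // pick a random point between 70% and 90% of the lifetime
        jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notBefore := time.Date(2025, 6, 27, 13, 5, 20, 0, time.UTC) // assumed issue time
        notAfter := time.Date(2026, 6, 27, 13, 5, 20, 0, time.UTC)  // expiry from the log
        deadline := rotationDeadline(notBefore, notAfter)
        fmt.Printf("rotation deadline is %s\n", deadline)
        fmt.Printf("Waiting %s for next certificate rotation\n", time.Until(deadline))
    }

With a one-year lifetime ending 2026-06-27, a deadline in a 70-90% window falls between early March and late May 2026, consistent with the 2026-05-10 deadline recorded in the entries below.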
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.645243 3562 server.go:982] "Starting client certificate rotation"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.645255 3562 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.647353 3562 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-05-10 23:05:34.483334604 +0000 UTC
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.647520 3562 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 3574h59m30.835819843s for next certificate rotation
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.649942 3562 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.652552 3562 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.653896 3562 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.664839 3562 remote_runtime.go:143] "Validated CRI v1 runtime API"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.664892 3562 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.685248 3562 remote_image.go:111] "Validated CRI v1 image API"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.689297 3562 fs.go:132] Filesystem UUIDs: map[2025-12-13-00-01-10-00:/dev/sr0 68d6f3e9-64e9-44a4-a1d0-311f9c629a01:/dev/vda4 6ea7ef63-bc43-49c4-9337-b3b14ffb2763:/dev/vda3 7B77-95E7:/dev/vda2]
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.689328 3562 fs.go:133] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.706524 3562 manager.go:217] Machine: {Timestamp:2025-12-13 00:06:03.705080368 +0000 UTC m=+0.188310931 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:c1bd596843fb445da20eca66471ddf66 SystemUUID:3461abea-dacc-43bb-a12a-d3dbbec9e4b5 BootID:6438f10d-e0bf-466f-b6a4-f002fdf9da10 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85294297088 Type:vfs Inodes:41680320 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ef:a3:5e Speed:0 Mtu:1500} {Name:br-int MacAddress:4e:ec:11:72:80:3b Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ef:a3:5e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:56:b2:5e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:5d:8a:35 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fa:8b:e6 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:16:ad:68 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fe:92:d5:c2:ac:94 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:b6:dc:d9:26:03:d4 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6e:81:50:3a:c2:97 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.706770 3562 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.707042 3562 manager.go:233] Version: {KernelVersion:5.14.0-427.22.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.708390 3562 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.708634 3562 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.709890 3562 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.709970 3562 container_manager_linux.go:304] "Creating device plugin manager"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.710180 3562 manager.go:136] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.710458 3562 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.710971 3562 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.711298 3562 server.go:1227] "Using root directory" path="/var/lib/kubelet"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.711880 3562 kubelet.go:406] "Attempting to sync node with API server"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.711921 3562 kubelet.go:311] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.711945 3562 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.711961 3562 kubelet.go:322] "Adding apiserver pod source"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.712131 3562 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.713836 3562 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="cri-o" version="1.29.5-5.rhaos4.16.git7032128.el9" apiVersion="v1"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.714850 3562 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.715520 3562 kubelet.go:826] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.715743 3562 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused
Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.715785 3562 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused
Dec 13 00:06:03 crc kubenswrapper[3562]: E1213 00:06:03.715838 3562 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused
Dec 13 00:06:03 crc kubenswrapper[3562]: E1213 00:06:03.715890 3562 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716108 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716143 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716155 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716176 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716187 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716204 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716214 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716225 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716239 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716250 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716267 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716278 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716291 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716307 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716318 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.716567 3562 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.717122 3562 server.go:1262] "Started kubelet"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.717384 3562 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.717546 3562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.719238 3562 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.198:6443: connect: connection refused
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.721852 3562 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.721871 3562 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 00:06:03 crc systemd[1]: Started Kubernetes Kubelet.
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.724320 3562 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.724343 3562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.724417 3562 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-03-27 10:58:19.525795188 +0000 UTC
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.724452 3562 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 2506h52m15.80134526s for next certificate rotation
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.724556 3562 volume_manager.go:289] "The desired_state_of_world populator starts"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.724613 3562 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 00:06:03 crc kubenswrapper[3562]: E1213 00:06:03.739725 3562 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.198:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18809d9fd4571d31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:06:03.717074225 +0000 UTC m=+0.200304758,LastTimestamp:2025-12-13 00:06:03.717074225 +0000 UTC m=+0.200304758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.739894 3562 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused
Dec 13 00:06:03 crc kubenswrapper[3562]: E1213 00:06:03.740056 3562 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.740517 3562 factory.go:153] Registering CRI-O factory
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.740545 3562 factory.go:221] Registration of the crio container factory successfully
Dec 13 00:06:03 crc kubenswrapper[3562]: E1213 00:06:03.740609 3562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused" interval="200ms"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.740642 3562 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.740658 3562 factory.go:55] Registering systemd factory
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.740671 3562 factory.go:221] Registration of the systemd container factory successfully
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.740700 3562 factory.go:103] Registering Raw factory
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.740722 3562 manager.go:1196] Started watching for new ooms in manager
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.741558 3562 manager.go:319] Starting recovery of all containers
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.742006 3562 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.766532 3562 manager.go:324] Recovery completed
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774714 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774762 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774780 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774795 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774809 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774825 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774840 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="12e733dd-0939-4f1b-9cbb-13897e093787" volumeName="kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774885 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774920 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774955 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6268b7fe-8910-4505-b404-6f1df638105c" volumeName="kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774970 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.774986 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775004 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775018 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775034 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775048 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775063 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775077 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775092 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775129 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775145 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775160 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775174 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775207 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775225 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775241 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775256 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775270 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775285 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775300 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775315 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775345 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775381 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775396 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775424 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775440 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775455 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775469 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775484 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775498 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775513 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775528 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775542 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775558 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775577 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775591 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775605 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775620 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775641 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775658 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775673 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775692 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775715 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775733 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775748 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775762 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775776 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775791 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.775808 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776430 3562 reconstruct_new.go:149] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776456 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776472 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776490 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776504 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776519 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776533 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776548 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776563 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776579 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776595 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776609 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776623 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776638 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776652 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776665 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776680 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776694 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776708 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776726 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776748 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776765 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776779 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776793 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776809 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776822 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776837 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776853 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776875 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776890 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776917 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776947 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776964 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776979 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.776993 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777007 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777024 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777039 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777053 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777074 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a48baf-1bee-4921-8bb2-9b7320e76f79" volumeName="kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777091 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777107 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777123 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777141 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777158 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777174 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777192 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777208 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777224 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777238 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777254 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777269 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777285 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777299 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777314 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777329 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777345 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777683 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777741 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777770 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777784 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777796 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777813 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777826 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777885 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777898 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.777970 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.778970 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779014 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779052 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779078 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779179 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779221 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779244 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779278 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779307 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779399 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779426 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779460 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779489 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779568 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779607 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779631 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779663 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779736 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a23c0ee-5648-448c-b772-83dced2891ce" volumeName="kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779780 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779806 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779838 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779862 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f40333-c860-4c04-8058-a0bf572dcf12" volumeName="kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779957 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.779995 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780021 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780056
3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780136 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780179 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780205 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780502 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780533 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780553 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780680 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780728 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780764 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780785 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780807 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780828 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780852 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780870 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5d722a-1123-4935-9740-52a08d018bc9" volumeName="kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780893 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780931 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780950 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780972 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.780989 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781010 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781028 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781049 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781067 3562 reconstruct_new.go:135] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781088 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781107 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781147 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781169 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781186 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781211 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781230 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781252 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781270 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781291 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781308 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781324 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781347 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781367 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781383 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781403 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781419 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781440 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781456 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781476 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781532 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781552 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781582 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781610 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781637 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781657 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781688 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781710 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781731 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781754 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781777 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781797 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781819 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781842 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781863 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781880 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf1a8966-f594-490a-9fbb-eec5bafd13d3" volumeName="kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781917 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781939 3562 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r" seLinuxMountContext="" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781951 3562 reconstruct_new.go:102] "Volume reconstruction finished" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.781965 3562 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.787316 3562 cpu_manager.go:215] "Starting CPU manager" policy="none" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.787336 3562 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.787355 3562 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.789200 3562 policy_none.go:49] "None policy: Start" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.790054 3562 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.790121 3562 state_mem.go:35] "Initializing new in-memory state store" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.827987 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.830305 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.830579 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.830692 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.830813 3562 
kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:06:03 crc kubenswrapper[3562]: E1213 00:06:03.834356 3562 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.198:6443: connect: connection refused" node="crc" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.854379 3562 manager.go:296] "Starting Device Plugin manager" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.854767 3562 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.854783 3562 server.go:79] "Starting device plugin registration server" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.855321 3562 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.855409 3562 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.855418 3562 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.866185 3562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.868146 3562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.873829 3562 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.873871 3562 kubelet.go:2343] "Starting kubelet main sync loop" Dec 13 00:06:03 crc kubenswrapper[3562]: E1213 00:06:03.873931 3562 kubelet.go:2367] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 00:06:03 crc kubenswrapper[3562]: W1213 00:06:03.875627 3562 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused Dec 13 00:06:03 crc kubenswrapper[3562]: E1213 00:06:03.875671 3562 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused Dec 13 00:06:03 crc kubenswrapper[3562]: E1213 00:06:03.906156 3562 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:06:03 crc kubenswrapper[3562]: E1213 00:06:03.942093 3562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused" interval="400ms" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.974274 3562 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 13 00:06:03 crc 
kubenswrapper[3562]: I1213 00:06:03.974364 3562 topology_manager.go:215] "Topology Admit Handler" podUID="d3ae206906481b4831fd849b559269c8" podNamespace="openshift-machine-config-operator" podName="kube-rbac-proxy-crio-crc" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.974415 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.975963 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.975998 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.976012 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.976079 3562 topology_manager.go:215] "Topology Admit Handler" podUID="b2a6a3b2ca08062d24afa4c01aaf9e4f" podNamespace="openshift-etcd" podName="etcd-crc" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.976104 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.976296 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.976369 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.976694 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.976738 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.976754 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.976849 3562 topology_manager.go:215] "Topology Admit Handler" podUID="ae85115fdc231b4002b57317b41a6400" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.976890 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.977005 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.977039 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.977278 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.977314 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.977368 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.977694 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.977711 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.977719 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.977780 3562 topology_manager.go:215] "Topology Admit Handler" podUID="bd6a3a59e513625ca0ae3724df2686bc" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.977801 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.978128 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.978153 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.978631 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.978647 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.978646 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.978676 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.978692 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.978793 3562 topology_manager.go:215] "Topology Admit Handler" podUID="6a57a7fb1944b43a6bd11a349520d301" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.978834 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.978656 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.979186 3562 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.979327 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.979565 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.979596 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.979612 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.979762 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.979802 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.980321 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.980432 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.980519 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.980633 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.980649 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.980667 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.980678 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.980684 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:03 crc kubenswrapper[3562]: I1213 00:06:03.980694 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.034694 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.035735 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.035775 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.035792 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.035821 3562 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:06:04 crc 
kubenswrapper[3562]: E1213 00:06:04.038661 3562 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.198:6443: connect: connection refused" node="crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.087701 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088128 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088163 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088191 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088215 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088237 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088457 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088478 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088499 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088519 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088539 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088558 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088578 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088616 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.088637 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.190805 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.190877 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.190939 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 
13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.190964 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.190996 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191008 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191016 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191043 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191061 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191069 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191071 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191083 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191190 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191093 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191215 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191104 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191234 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191254 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191258 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191278 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191289 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191305 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191310 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191357 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191361 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191381 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191391 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191422 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191468 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.191525 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.339497 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: E1213 00:06:04.347749 3562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused" interval="800ms" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.347893 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: W1213 00:06:04.359090 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a6a3b2ca08062d24afa4c01aaf9e4f.slice/crio-6aa21d907f01bbab666885a1cda796083aa4cd8840eeb5bfdb2736af6be2f539 WatchSource:0}: Error finding container 6aa21d907f01bbab666885a1cda796083aa4cd8840eeb5bfdb2736af6be2f539: Status 404 returned error can't find the container with id 6aa21d907f01bbab666885a1cda796083aa4cd8840eeb5bfdb2736af6be2f539 Dec 13 00:06:04 crc kubenswrapper[3562]: W1213 00:06:04.360090 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3ae206906481b4831fd849b559269c8.slice/crio-2331b9b00ed6949e93c203ee51bcd97406c091ba0a02369de4aa94cb1b65b209 WatchSource:0}: Error finding container 2331b9b00ed6949e93c203ee51bcd97406c091ba0a02369de4aa94cb1b65b209: Status 404 returned error can't find the container with id 2331b9b00ed6949e93c203ee51bcd97406c091ba0a02369de4aa94cb1b65b209 Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.372221 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.379298 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: W1213 00:06:04.386409 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a57a7fb1944b43a6bd11a349520d301.slice/crio-74f5daf4abb13c4e176b1a0659a8a2023f2da3409be8ca275e0207b03b378c6b WatchSource:0}: Error finding container 74f5daf4abb13c4e176b1a0659a8a2023f2da3409be8ca275e0207b03b378c6b: Status 404 returned error can't find the container with id 74f5daf4abb13c4e176b1a0659a8a2023f2da3409be8ca275e0207b03b378c6b Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.392639 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:04 crc kubenswrapper[3562]: W1213 00:06:04.393103 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd6a3a59e513625ca0ae3724df2686bc.slice/crio-d395f77fb181a2206c88bf49d44549409e40b1e106871eb4286db4c34b802175 WatchSource:0}: Error finding container d395f77fb181a2206c88bf49d44549409e40b1e106871eb4286db4c34b802175: Status 404 returned error can't find the container with id d395f77fb181a2206c88bf49d44549409e40b1e106871eb4286db4c34b802175 Dec 13 00:06:04 crc kubenswrapper[3562]: W1213 00:06:04.412418 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae85115fdc231b4002b57317b41a6400.slice/crio-b950e13007c25be13322176af8b57a904bc7e50a8d398e51f166e8d5b9a8df1a WatchSource:0}: Error finding container b950e13007c25be13322176af8b57a904bc7e50a8d398e51f166e8d5b9a8df1a: Status 404 returned error can't find the container with id b950e13007c25be13322176af8b57a904bc7e50a8d398e51f166e8d5b9a8df1a Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.438931 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.440782 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.440866 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.440886 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.440940 3562 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:06:04 crc kubenswrapper[3562]: E1213 00:06:04.441967 3562 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.198:6443: connect: connection refused" node="crc" Dec 13 00:06:04 crc kubenswrapper[3562]: W1213 00:06:04.608124 3562 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused Dec 13 00:06:04 crc kubenswrapper[3562]: E1213 00:06:04.608190 3562 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused Dec 13 00:06:04 crc kubenswrapper[3562]: W1213 00:06:04.704787 3562 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused Dec 13 00:06:04 crc kubenswrapper[3562]: E1213 00:06:04.704883 3562 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.721060 3562 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.198:6443: connect: connection refused Dec 13 00:06:04 crc kubenswrapper[3562]: W1213 00:06:04.757048 3562 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused Dec 13 00:06:04 crc kubenswrapper[3562]: E1213 00:06:04.757530 3562 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused Dec 13 00:06:04 crc kubenswrapper[3562]: E1213 00:06:04.795223 3562 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.198:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18809d9fd4571d31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:06:03.717074225 +0000 UTC m=+0.200304758,LastTimestamp:2025-12-13 00:06:03.717074225 +0000 UTC m=+0.200304758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.878662 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"c6e9ba1bf1bd3177fe795558bc6feed1a2c0a824249812472c2a530048b2bc9b"} Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.878701 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"d395f77fb181a2206c88bf49d44549409e40b1e106871eb4286db4c34b802175"} Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.880555 3562 generic.go:334] "Generic (PLEG): container finished" podID="6a57a7fb1944b43a6bd11a349520d301" containerID="f102192e131585e3aa0101339d20c0fb8810cf02fc30b0cacf0226320bff2121" exitCode=0 Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.880612 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerDied","Data":"f102192e131585e3aa0101339d20c0fb8810cf02fc30b0cacf0226320bff2121"} Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.880629 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"74f5daf4abb13c4e176b1a0659a8a2023f2da3409be8ca275e0207b03b378c6b"} Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.880704 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.881458 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.881499 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.881516 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.881970 3562 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="cc3197dfdeecafca4083be69c7be9d278ec564b7da4b46e6aff97eafc552cdd6" exitCode=0 Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.882045 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"cc3197dfdeecafca4083be69c7be9d278ec564b7da4b46e6aff97eafc552cdd6"} Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.882084 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"6aa21d907f01bbab666885a1cda796083aa4cd8840eeb5bfdb2736af6be2f539"} Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.882178 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.883055 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.883081 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.883090 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.883484 3562 generic.go:334] "Generic (PLEG): container finished" podID="d3ae206906481b4831fd849b559269c8" containerID="d25731c743b17f30917bb1194cad0b8dbdf5fb58787af71bc746fb12a140f23c" exitCode=0 Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.883562 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerDied","Data":"d25731c743b17f30917bb1194cad0b8dbdf5fb58787af71bc746fb12a140f23c"} Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.883590 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"2331b9b00ed6949e93c203ee51bcd97406c091ba0a02369de4aa94cb1b65b209"} Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.883667 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.886074 3562 kubelet_node_status.go:729] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.886112 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.886123 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.888769 3562 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="6b714d9cc211928a69778273896d7209694b3a4d32a602ad86f5ee50b893ffab" exitCode=0 Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.888810 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerDied","Data":"6b714d9cc211928a69778273896d7209694b3a4d32a602ad86f5ee50b893ffab"} Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.888831 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"b950e13007c25be13322176af8b57a904bc7e50a8d398e51f166e8d5b9a8df1a"} Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.888983 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.890159 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.890195 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.890205 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.898572 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.899451 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.899492 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:04 crc kubenswrapper[3562]: I1213 00:06:04.899507 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:05 crc kubenswrapper[3562]: E1213 00:06:05.149755 3562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused" interval="1.6s" Dec 13 00:06:05 crc kubenswrapper[3562]: W1213 00:06:05.233412 3562 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused Dec 13 00:06:05 crc kubenswrapper[3562]: E1213 00:06:05.233475 3562 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.198:6443: connect: connection refused Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.243294 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.247052 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.247089 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.247104 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.247136 3562 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:06:05 crc kubenswrapper[3562]: E1213 00:06:05.248023 3562 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.198:6443: connect: connection refused" node="crc" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.893514 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"52d20f9ab887817f249c65d311a43b4ddf8e6f25c40d131af51e43d68c13b8a5"} Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.893559 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"be21891361be37b2bb3a7cc9ff2887447fa4e15f09d28dde970934db72e7075e"} Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.893570 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"d76e96e2997d635e1afbf2b04172f3fbe83b593dca14c71c7c64258b695939ca"} Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.893684 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.895365 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.895388 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.895401 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.895787 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"56cabbb337eb531e7c5b391e0c4d71d3b18e5d9486d8b8436b4ffaba95d06b1a"} Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.895811 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"6899c3f02391ba66dd28d9e6a3e635c42f1041342ceb0dbc19fab61d023f1cd9"} Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.895825 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"36c959ba0814815304378252246375b98efbb75aa0d370c3f7a558c11c3c694d"} Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.895976 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.896537 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.896551 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.896565 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.900952 3562 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="6535e5f4fba462e2ba9f914573103766a941ff07f54aac5e9ede85ef996c25a3" exitCode=0 Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.901020 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"6535e5f4fba462e2ba9f914573103766a941ff07f54aac5e9ede85ef996c25a3"} Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.901187 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.902025 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.902081 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.902092 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.903579 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"4d6eebb14ec01238ec01feb97bef74e8890199677e317be96b05f080f6c10de3"} Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.903674 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.904408 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.904437 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.904447 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.906218 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"d8eb425328e224404d5ed33dee54bc752129804bab6d33fec60a1e7d4b86f246"} Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.906282 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"4ce7dfdcabe0ad2eec941a3cf6bee306e49c16470f3d8addbbd2944392fb77ec"} Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.906323 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"ded8bd9cf1fd51afd473a88657f94e48f3a4edec471438cfcf4356047ac64fe9"} Dec 13 00:06:05 crc kubenswrapper[3562]: I1213 00:06:05.906375 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"4d133fa5757238da3b7fe8dd7c27de06e8aa00eb048d2818fad45877fd1377a4"} Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.466162 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.849016 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.850739 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.850769 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.850779 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.850802 3562 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.911794 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.911798 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"49c6a556089813d09a464ae0020d48fc2ecced0e755f8f8a63664f0c7a6148cc"} Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.912846 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.912881 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.912891 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.922524 3562 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="12efdefb13a8046f3f2058e297982e45b9d5f80089f5fdec1af9450a2a75b728" exitCode=0 Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.922612 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume 
controller attach/detach" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.922620 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"12efdefb13a8046f3f2058e297982e45b9d5f80089f5fdec1af9450a2a75b728"} Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.922817 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.923438 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.923474 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.923487 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.923729 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.923756 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:06 crc kubenswrapper[3562]: I1213 00:06:06.923768 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.660511 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.673804 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.730619 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.930836 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"1bf755a1d3d98e0e3c16a81daab59d53eadaafa42810408769b452b81c32400d"} Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.930985 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.930879 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.931017 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"e479ebd3ee5b319ccda807e3ceb18bd04566fb715742bf07c05aec805f970f1b"} Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.931027 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.931037 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"00e10baab6903e1f66e930ffdafc672528c9026a1c3de9e77115eb13f755a1b5"} Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.932631 3562 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.932709 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.932723 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.932637 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.932793 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:07 crc kubenswrapper[3562]: I1213 00:06:07.932803 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.937271 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"084110415c32452ba5a335177fcbc6aba953e510ff09bdd9312f0725ffa2e771"} Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.937340 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.937439 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.937537 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.938531 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.938588 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.938609 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.940974 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.941057 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.941060 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.941076 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.941095 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:08 crc kubenswrapper[3562]: I1213 00:06:08.941106 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:09 crc kubenswrapper[3562]: I1213 00:06:09.775145 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 13 00:06:09 crc kubenswrapper[3562]: I1213 00:06:09.940453 3562 
kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:09 crc kubenswrapper[3562]: I1213 00:06:09.941863 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:09 crc kubenswrapper[3562]: I1213 00:06:09.941936 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:09 crc kubenswrapper[3562]: I1213 00:06:09.941949 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:10 crc kubenswrapper[3562]: I1213 00:06:10.941853 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:10 crc kubenswrapper[3562]: I1213 00:06:10.942848 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:10 crc kubenswrapper[3562]: I1213 00:06:10.942964 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:10 crc kubenswrapper[3562]: I1213 00:06:10.942981 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:12 crc kubenswrapper[3562]: I1213 00:06:12.647617 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 13 00:06:12 crc kubenswrapper[3562]: I1213 00:06:12.647825 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:12 crc kubenswrapper[3562]: I1213 00:06:12.649803 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:12 crc kubenswrapper[3562]: I1213 00:06:12.649841 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:12 crc kubenswrapper[3562]: I1213 00:06:12.649857 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:12 crc kubenswrapper[3562]: I1213 00:06:12.964254 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:12 crc kubenswrapper[3562]: I1213 00:06:12.964397 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:12 crc kubenswrapper[3562]: I1213 00:06:12.965508 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:12 crc kubenswrapper[3562]: I1213 00:06:12.965555 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:12 crc kubenswrapper[3562]: I1213 00:06:12.965569 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:13 crc kubenswrapper[3562]: I1213 00:06:13.041376 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:06:13 crc kubenswrapper[3562]: I1213 00:06:13.041579 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:13 crc kubenswrapper[3562]: I1213 00:06:13.042796 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 13 00:06:13 crc kubenswrapper[3562]: I1213 00:06:13.042870 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:13 crc kubenswrapper[3562]: I1213 00:06:13.042890 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:13 crc kubenswrapper[3562]: I1213 00:06:13.249265 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:13 crc kubenswrapper[3562]: I1213 00:06:13.253783 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:13 crc kubenswrapper[3562]: E1213 00:06:13.906885 3562 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 00:06:13 crc kubenswrapper[3562]: I1213 00:06:13.946928 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:13 crc kubenswrapper[3562]: I1213 00:06:13.947752 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:13 crc kubenswrapper[3562]: I1213 00:06:13.947790 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:13 crc kubenswrapper[3562]: I1213 00:06:13.947806 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:14 crc kubenswrapper[3562]: I1213 00:06:14.948775 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:14 crc kubenswrapper[3562]: I1213 00:06:14.949609 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:14 crc kubenswrapper[3562]: I1213 00:06:14.949655 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:14 crc kubenswrapper[3562]: I1213 00:06:14.949666 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:14 crc kubenswrapper[3562]: I1213 00:06:14.953396 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:15 crc kubenswrapper[3562]: I1213 00:06:15.721860 3562 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": net/http: TLS handshake timeout Dec 13 00:06:15 crc kubenswrapper[3562]: I1213 00:06:15.951694 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:15 crc kubenswrapper[3562]: I1213 00:06:15.952884 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:15 crc kubenswrapper[3562]: I1213 00:06:15.952979 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:15 crc kubenswrapper[3562]: I1213 00:06:15.953006 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:15 crc kubenswrapper[3562]: I1213 00:06:15.964613 
3562 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 13 00:06:15 crc kubenswrapper[3562]: I1213 00:06:15.964842 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 13 00:06:16 crc kubenswrapper[3562]: I1213 00:06:16.397725 3562 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 13 00:06:16 crc kubenswrapper[3562]: I1213 00:06:16.397827 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 13 00:06:16 crc kubenswrapper[3562]: E1213 00:06:16.751382 3562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Dec 13 00:06:16 crc kubenswrapper[3562]: E1213 00:06:16.852570 3562 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Dec 13 00:06:17 crc kubenswrapper[3562]: I1213 00:06:17.108155 3562 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
Dec 13 00:06:17 crc kubenswrapper[3562]: I1213 00:06:17.108278 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 13 00:06:17 crc kubenswrapper[3562]: I1213 00:06:17.113049 3562 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Dec 13 00:06:17 crc kubenswrapper[3562]: I1213 00:06:17.113127 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 13 00:06:17 crc kubenswrapper[3562]: I1213 00:06:17.736357 3562 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]log ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]etcd ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/priority-and-fairness-filter ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/start-apiextensions-informers ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/start-apiextensions-controllers ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/crd-informer-synced ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 13 00:06:17 crc kubenswrapper[3562]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/start-system-namespaces-controller ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/bootstrap-controller ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/apiservice-registration-controller ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/apiservice-status-available-controller ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]autoregister-completion ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/apiservice-openapi-controller ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 13 00:06:17 crc kubenswrapper[3562]: [+]poststarthook/apiservice-discovery-controller ok
Dec 13 00:06:17 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:06:17 crc kubenswrapper[3562]: I1213 00:06:17.736426 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:06:20 crc kubenswrapper[3562]: I1213 00:06:20.052683 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Dec 13 00:06:20 crc kubenswrapper[3562]: I1213 00:06:20.053795 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 00:06:20 crc kubenswrapper[3562]: I1213 00:06:20.053835 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 00:06:20 crc kubenswrapper[3562]: I1213 00:06:20.053851 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 00:06:20 crc kubenswrapper[3562]: I1213 00:06:20.053876 3562 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Dec 13 00:06:20 crc kubenswrapper[3562]: E1213 00:06:20.058551 3562 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.031124 3562 trace.go:236] Trace[1937681765]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Dec-2025 00:06:07.090) (total time: 14940ms):
Dec 13 00:06:22 crc kubenswrapper[3562]: Trace[1937681765]: ---"Objects listed" error: 14940ms (00:06:22.031)
Dec 13 00:06:22 crc kubenswrapper[3562]: Trace[1937681765]: [14.940417346s] [14.940417346s] END
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.031153 3562 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.031154 3562 trace.go:236] Trace[1903268029]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Dec-2025 00:06:07.281) (total time: 14749ms):
Dec 13 00:06:22 crc kubenswrapper[3562]: Trace[1903268029]: ---"Objects listed" error: 14749ms (00:06:22.031)
Dec 13 00:06:22 crc kubenswrapper[3562]: Trace[1903268029]: [14.749911757s] [14.749911757s] END
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.031202 3562 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.032194 3562 trace.go:236] Trace[192122248]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Dec-2025 00:06:07.529) (total time: 14503ms):
Dec 13 00:06:22 crc kubenswrapper[3562]: Trace[192122248]: ---"Objects listed" error: 14503ms (00:06:22.032)
Dec 13 00:06:22 crc kubenswrapper[3562]: Trace[192122248]: [14.503054815s] [14.503054815s] END
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.032229 3562 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.032956 3562 reconstruct_new.go:210] "DevicePaths of reconstructed volumes updated"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.032988 3562 trace.go:236] Trace[1222310535]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Dec-2025 00:06:08.151) (total time: 13881ms):
Dec 13 00:06:22 crc kubenswrapper[3562]: Trace[1222310535]: ---"Objects listed" error: 13881ms (00:06:22.032)
Dec 13 00:06:22 crc kubenswrapper[3562]: Trace[1222310535]: [13.881718523s] [13.881718523s] END
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.033004 3562 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.673653 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.687310 3562 kubelet.go:1935] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.689317 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.724298 3562 apiserver.go:52] "Watching apiserver"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.735712 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.739397 3562 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.743282 3562 kubelet.go:2429] "SyncLoop ADD" source="api"
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/marketplace-operator-8b455464d-f9xdt","openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j","openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b","openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg","openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc","openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw","openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs","openshift-apiserver/apiserver-7fc54b8dd7-d2bhp","openshift-dns-operator/dns-operator-75f687757b-nz2xb","openshift-kube-scheduler/installer-8-crc","openshift-marketplace/redhat-operators-f4jkp","openshift-dns/dns-default-gbw49","openshift-kube-controller-manager/revision-pruner-10-crc","openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh","openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm","openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z","openshift-kube-apiserver/installer-12-crc","openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb","openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2","openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7","openshift-kube-controller-manager/revision-pruner-11-crc","openshift-machine-config-operator/machine-config-server-v65wr","openshift-kube-controller-manager/revision-pruner-9-crc","openshift-network-operator/iptables-alerter-wwpnd","openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd","openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz","hostpath-provisioner/csi-hostpathplugin-hvm8g","openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv","openshift-ingress-canary/ingress-canary-2vhcn","openshift-network-diagnostics/network-check-target-v54bt","openshift-network-node-identity/network-node-identity-7xghp","openshift-console/console-644bb77b49-5x5xk","openshift-machine-config-operator/machine-config-daemon-zpnhg","openshift-multus/multus-additional-cni-plugins-bzj2p","openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd","openshift-kube-scheduler/installer-7-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-marketplace/community-operators-sdddl","openshift-kube-controller-manager/revision-pruner-8-crc","openshift-marketplace/redhat-marketplace-8s8pc","openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh","openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz","openshift-ingress/router-default-5c9bf7bc58-6jctv","openshift-kube-controller-manager/installer-10-crc","openshift-kube-controller-manager/installer-10-retry-1-crc","openshift-kube-apiserver/installer-9-crc","openshift-kube-controller-manager/installer-11-crc","openshift-marketplace/community-operators-8jhz6","openshift-multus/multus-q88th","openshift-multus/network-metrics-daemon-qdfr4","openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b","openshift-controller-manager/controller-manager-778975cc4f-x5vcf","openshift-image-registry/image-registry-75779c45fd-v2j2v","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9","openshift-etcd/etcd-crc","openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2","openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7","open
shift-ovn-kubernetes/ovnkube-node-44qcg","openshift-dns/node-resolver-dn27q","openshift-etcd-operator/etcd-operator-768d5b5d86-722mg","openshift-multus/multus-admission-controller-6c7c885997-4hbbc","openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv","openshift-service-ca/service-ca-666f99b6f-kk8kg","openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m","openshift-console/downloads-65476884b9-9wcvx","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-image-registry/node-ca-l92hr","openshift-marketplace/certified-operators-7287f","openshift-network-operator/network-operator-767c585db5-zd56b","openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf","openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8","openshift-console-operator/console-conversion-webhook-595f9969b-l6z49","openshift-console-operator/console-operator-5dbbc74dc9-cp5cd","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr","openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb","openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46","openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"] Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.743390 3562 topology_manager.go:215] "Topology Admit Handler" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" podNamespace="openshift-machine-config-operator" podName="machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.743523 3562 topology_manager.go:215] "Topology Admit Handler" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" podNamespace="openshift-etcd-operator" podName="etcd-operator-768d5b5d86-722mg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.743576 3562 topology_manager.go:215] "Topology Admit Handler" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" podNamespace="openshift-operator-lifecycle-manager" podName="catalog-operator-857456c46-7f5wf" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.743634 3562 topology_manager.go:215] "Topology Admit Handler" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" podNamespace="openshift-machine-api" podName="machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.743692 3562 topology_manager.go:215] "Topology Admit Handler" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" podNamespace="openshift-kube-apiserver-operator" podName="kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.743749 3562 topology_manager.go:215] "Topology Admit Handler" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" podNamespace="openshift-marketplace" podName="marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.743838 3562 topology_manager.go:215] "Topology Admit Handler" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" podNamespace="openshift-service-ca-operator" podName="service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.743934 3562 topology_manager.go:215] "Topology Admit Handler" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" podNamespace="openshift-network-operator" podName="network-operator-767c585db5-zd56b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744022 3562 topology_manager.go:215] "Topology Admit Handler" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" 
podNamespace="openshift-operator-lifecycle-manager" podName="package-server-manager-84d578d794-jw7r2" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744080 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744141 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744167 3562 topology_manager.go:215] "Topology Admit Handler" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" podNamespace="openshift-operator-lifecycle-manager" podName="olm-operator-6d8474f75f-x54mh" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.744209 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.744208 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744250 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744258 3562 topology_manager.go:215] "Topology Admit Handler" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" podNamespace="openshift-config-operator" podName="openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744282 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.744288 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.744335 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744348 3562 topology_manager.go:215] "Topology Admit Handler" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" podNamespace="openshift-apiserver-operator" podName="openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744375 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744492 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744504 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.744558 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744601 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744614 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.744705 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744708 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.744779 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744742 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.744838 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.744880 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.744937 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.744977 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.745043 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745073 3562 topology_manager.go:215] "Topology Admit Handler" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" podNamespace="openshift-authentication-operator" podName="authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745274 3562 topology_manager.go:215] "Topology Admit Handler" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" podNamespace="openshift-machine-api" podName="control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745408 3562 topology_manager.go:215] "Topology Admit Handler" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" podNamespace="openshift-kube-storage-version-migrator-operator" podName="kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745457 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.745504 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745541 3562 topology_manager.go:215] "Topology Admit Handler" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" podNamespace="openshift-controller-manager-operator" podName="openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745622 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745651 3562 topology_manager.go:215] "Topology Admit Handler" podUID="10603adc-d495-423c-9459-4caa405960bb" podNamespace="openshift-dns-operator" podName="dns-operator-75f687757b-nz2xb" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.745672 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745710 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.745740 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745770 3562 topology_manager.go:215] "Topology Admit Handler" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" podNamespace="openshift-kube-controller-manager-operator" podName="kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745838 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.745877 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745948 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.745972 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.746029 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.746049 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.746051 3562 topology_manager.go:215] "Topology Admit Handler" podUID="71af81a9-7d43-49b2-9287-c375900aa905" podNamespace="openshift-kube-scheduler-operator" podName="openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.746189 3562 topology_manager.go:215] "Topology Admit Handler" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" podNamespace="openshift-image-registry" podName="cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.746309 3562 topology_manager.go:215] "Topology Admit Handler" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" podNamespace="openshift-multus" podName="multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.746436 3562 topology_manager.go:215] "Topology Admit Handler" podUID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" podNamespace="openshift-multus" podName="multus-additional-cni-plugins-bzj2p" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.746591 3562 topology_manager.go:215] "Topology Admit Handler" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" podNamespace="openshift-multus" podName="network-metrics-daemon-qdfr4" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.746789 3562 topology_manager.go:215] "Topology Admit Handler" podUID="410cf605-1970-4691-9c95-53fdc123b1f3" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-control-plane-77c846df58-6l97b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.746949 3562 topology_manager.go:215] "Topology Admit Handler" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" podNamespace="openshift-network-diagnostics" podName="network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.747033 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.747075 3562 topology_manager.go:215] "Topology Admit Handler" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" podNamespace="openshift-network-diagnostics" podName="network-check-target-v54bt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.747082 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.747361 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.747430 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.747488 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.747538 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.747669 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.747224 3562 topology_manager.go:215] "Topology Admit Handler" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" podNamespace="openshift-network-node-identity" podName="network-node-identity-7xghp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.747840 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.747917 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.747974 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.748037 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.748082 3562 topology_manager.go:215] "Topology Admit Handler" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-node-44qcg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.748162 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.748388 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.748384 3562 topology_manager.go:215] "Topology Admit Handler" podUID="2b6d14a5-ca00-40c7-af7a-051a98a24eed" podNamespace="openshift-network-operator" podName="iptables-alerter-wwpnd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.748549 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.748618 3562 topology_manager.go:215] "Topology Admit Handler" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" podNamespace="openshift-kube-storage-version-migrator" podName="migrator-f7c6d88df-q2fnv" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.748731 3562 topology_manager.go:215] "Topology Admit Handler" podUID="13045510-8717-4a71-ade4-be95a76440a7" podNamespace="openshift-dns" podName="dns-default-gbw49" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.748741 3562 kubelet.go:1935] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.748866 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.748950 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.749034 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.749064 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.749255 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.749270 3562 topology_manager.go:215] "Topology Admit Handler" podUID="6a23c0ee-5648-448c-b772-83dced2891ce" podNamespace="openshift-dns" podName="node-resolver-dn27q" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.749307 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.749361 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.749407 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.749480 3562 topology_manager.go:215] "Topology Admit Handler" podUID="9fb762d1-812f-43f1-9eac-68034c1ecec7" podNamespace="openshift-cluster-version" podName="cluster-version-operator-6d5d9649f6-x6d46" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.749635 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.749968 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.750176 3562 topology_manager.go:215] "Topology Admit Handler" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" podNamespace="openshift-oauth-apiserver" podName="apiserver-69c565c9b6-vbdpd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.750492 3562 topology_manager.go:215] "Topology Admit Handler" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" podNamespace="openshift-operator-lifecycle-manager" podName="packageserver-8464bcc55b-sjnqz" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.749035 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.750679 3562 topology_manager.go:215] "Topology Admit Handler" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" podNamespace="openshift-ingress-operator" podName="ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.750795 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.750813 3562 topology_manager.go:215] "Topology Admit Handler" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" podNamespace="openshift-cluster-samples-operator" podName="cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.750868 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.750893 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.750952 3562 topology_manager.go:215] "Topology Admit Handler" podUID="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" podNamespace="openshift-cluster-machine-approver" podName="machine-approver-7874c8775-kh4j9" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.751103 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.751147 3562 topology_manager.go:215] "Topology Admit Handler" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" podNamespace="openshift-ingress" podName="router-default-5c9bf7bc58-6jctv" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.751398 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.751446 3562 topology_manager.go:215] "Topology Admit Handler" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" podNamespace="openshift-machine-config-operator" podName="machine-config-daemon-zpnhg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.751638 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.751683 3562 topology_manager.go:215] "Topology Admit Handler" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" podNamespace="openshift-console-operator" podName="console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.751883 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.751939 3562 topology_manager.go:215] "Topology Admit Handler" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" podNamespace="openshift-console-operator" podName="console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.751958 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.752110 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.752149 3562 topology_manager.go:215] "Topology Admit Handler" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" podNamespace="openshift-machine-config-operator" podName="machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.752342 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.752397 3562 topology_manager.go:215] "Topology Admit Handler" podUID="6268b7fe-8910-4505-b404-6f1df638105c" podNamespace="openshift-console" podName="downloads-65476884b9-9wcvx" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.752597 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.752665 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.752721 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.752763 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.752806 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.752867 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.753176 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.753525 3562 topology_manager.go:215] "Topology Admit Handler" podUID="bf1a8b70-3856-486f-9912-a2de1d57c3fb" podNamespace="openshift-machine-config-operator" podName="machine-config-server-v65wr" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.753833 3562 topology_manager.go:215] "Topology Admit Handler" podUID="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" podNamespace="openshift-image-registry" podName="node-ca-l92hr" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.754064 3562 topology_manager.go:215] "Topology Admit Handler" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" podNamespace="openshift-ingress-canary" podName="ingress-canary-2vhcn" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.754230 3562 topology_manager.go:215] "Topology Admit Handler" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" podNamespace="openshift-multus" podName="multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.754417 3562 topology_manager.go:215] "Topology Admit Handler" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" podNamespace="hostpath-provisioner" podName="csi-hostpathplugin-hvm8g" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.754615 3562 topology_manager.go:215] "Topology Admit Handler" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" podNamespace="openshift-marketplace" podName="certified-operators-7287f" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.754826 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.754928 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.754942 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.755018 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.755109 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.755167 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.755224 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.755224 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l92hr" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.755276 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.755297 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.755314 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.755332 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.755359 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.755373 3562 topology_manager.go:215] "Topology Admit Handler" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" podNamespace="openshift-marketplace" podName="community-operators-8jhz6" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.755400 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.755560 3562 topology_manager.go:215] "Topology Admit Handler" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" podNamespace="openshift-marketplace" podName="redhat-operators-f4jkp" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.755657 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.755739 3562 topology_manager.go:215] "Topology Admit Handler" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" podNamespace="openshift-marketplace" podName="redhat-marketplace-8s8pc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.755832 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.755880 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.756040 3562 topology_manager.go:215] "Topology Admit Handler" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-8-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.756228 3562 topology_manager.go:215] "Topology Admit Handler" podUID="e4a7de23-6134-4044-902a-0900dc04a501" podNamespace="openshift-service-ca" podName="service-ca-666f99b6f-kk8kg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.756370 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.756391 3562 topology_manager.go:215] "Topology Admit Handler" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251920-wcws2" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.756426 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.756468 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.756514 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.756559 3562 topology_manager.go:215] "Topology Admit Handler" podUID="a0453d24-e872-43af-9e7a-86227c26d200" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-9-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.756599 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.756641 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.756683 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.756799 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.756560 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.757312 3562 topology_manager.go:215] "Topology Admit Handler" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" podNamespace="openshift-kube-apiserver" podName="installer-9-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.757594 3562 topology_manager.go:215] "Topology Admit Handler" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" podNamespace="openshift-image-registry" podName="image-registry-75779c45fd-v2j2v" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.757758 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.758236 3562 topology_manager.go:215] "Topology Admit Handler" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-10-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.758352 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.758432 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.758603 3562 topology_manager.go:215] "Topology Admit Handler" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" podNamespace="openshift-authentication" podName="oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.758736 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.759192 3562 topology_manager.go:215] "Topology Admit Handler" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" podNamespace="openshift-kube-scheduler" podName="installer-7-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.759608 3562 topology_manager.go:215] "Topology Admit Handler" podUID="79050916-d488-4806-b556-1b0078b31e53" podNamespace="openshift-kube-controller-manager" podName="installer-10-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.759871 3562 topology_manager.go:215] "Topology Admit Handler" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" podNamespace="openshift-console" podName="console-644bb77b49-5x5xk" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.760143 3562 topology_manager.go:215] "Topology Admit Handler" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" podNamespace="openshift-kube-controller-manager" podName="installer-10-retry-1-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.760393 3562 topology_manager.go:215] "Topology Admit Handler" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" podNamespace="openshift-apiserver" podName="apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.760751 3562 topology_manager.go:215] "Topology Admit Handler" podUID="1784282a-268d-4e44-a766-43281414e2dc" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-11-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.761135 3562 topology_manager.go:215] "Topology Admit Handler" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" podNamespace="openshift-kube-scheduler" podName="installer-8-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.761435 3562 topology_manager.go:215] "Topology Admit Handler" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" podNamespace="openshift-kube-controller-manager" podName="installer-11-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.761721 3562 topology_manager.go:215] "Topology Admit Handler" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" podNamespace="openshift-kube-apiserver" podName="installer-12-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.762180 3562 topology_manager.go:215] "Topology Admit Handler" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.762360 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.762499 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.762699 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.762749 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.762807 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.762923 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.763089 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.763113 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.763168 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.763192 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.763446 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.764599 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.764680 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.764783 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.765051 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.765458 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.765655 3562 topology_manager.go:215] "Topology Admit Handler" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" podNamespace="openshift-controller-manager" podName="controller-manager-778975cc4f-x5vcf" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.766551 3562 topology_manager.go:215] "Topology Admit Handler" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251935-d7x6j" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.767170 3562 topology_manager.go:215] "Topology Admit Handler" podUID="ad171c4b-8408-4370-8e86-502999788ddb" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251950-x8jjd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.767414 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.767573 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.767733 3562 topology_manager.go:215] "Topology Admit Handler" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" podNamespace="openshift-marketplace" podName="community-operators-sdddl" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.768152 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.768282 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.768361 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.768495 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.769370 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.769731 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.770956 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.770975 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771049 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771055 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771062 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771090 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771100 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771116 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771146 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771200 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771222 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771237 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771316 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771334 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771357 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771342 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771499 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771544 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771587 3562 reflector.go:351] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771701 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.771863 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.772043 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.772133 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.772605 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.772609 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.772686 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.772773 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.772810 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.772833 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.772894 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.772965 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.773030 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.773057 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.773115 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.774199 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.774219 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.774998 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.775072 3562 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.777343 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.780515 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.793564 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.804525 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T20:04:50Z\\\",\\\"message\\\":\\\"time=\\\\\\\"2025-08-13T20:04:49Z\\\\\\\" level=info msg=\\\\\\\"Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:49Z\\\\\\\" level=info msg=\\\\\\\"Go OS/Arch: linux/amd64\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:49Z\\\\\\\" level=info msg=\\\\\\\"[metrics] Registering marketplace metrics\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:49Z\\\\\\\" level=info msg=\\\\\\\"[metrics] Serving marketplace metrics\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:49Z\\\\\\\" level=info msg=\\\\\\\"TLS keys set, using https for metrics\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:50Z\\\\\\\" level=warning msg=\\\\\\\"Config API is not available\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:50Z\\\\\\\" level=info msg=\\\\\\\"setting up scheme\\\\\\\"\\\\ntime=\\\\\\\"2025-08-13T20:04:50Z\\\\\\\" level=fatal msg=\\\\\\\"failed to determine if *v1.ConfigMap is namespaced: failed to get restmapping: failed to get server groups: Get \\\\\\\\\\\\\\\"https://10.217.4.1:443/api\\\\\\\\\\\\\\\": dial tcp 10.217.4.1:443: connect: connection 
refused\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T20:04:47Z\\\"}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.817883 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:10Z\\\"}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.827884 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers 
with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:10Z\\\"}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.840341 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 
00:06:22.851777 3562 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.855847 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.868928 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.877783 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0453d24-e872-43af-9e7a-86227c26d200\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-9-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.888092 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51936587-a4af-470d-ad92-8ab9062cbc72\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"collect-profiles-29251935-d7x6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.918477 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b73e61-d8d2-4892-8a19-005929c9d4e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:04Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:03Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d133fa5757238da3b7fe8dd7c27de06e8aa00eb048d2818fad45877fd1377a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T00:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ce7dfdcabe0ad2eec941a3cf6bee306e49c16470f3d8addbbd2944392fb77ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T00:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ded8bd9cf1fd51afd473a88657f94e48f3a4edec471438cfcf4356047ac64fe9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T00:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://49c6a556089813d09a464ae0020d48fc2ecced0e755f8f8a63664f0c7a6148cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T00:06:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d8eb425328e224404d5ed33dee54bc752129804bab6d33fec60a1e7d4b86f246\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e2
5ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T00:06:05Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b714d9cc211928a69778273896d7209694b3a4d32a602ad86f5ee50b893ffab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b714d9cc211928a69778273896d7209694b3a4d32a602ad86f5ee50b893ffab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T00:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T00:06:04Z\\\"}}}],\\\"startTime\\\":\\\"2025-12-13T00:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.931433 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T20:05:17Z\\\",\\\"message\\\":\\\"] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:04:36.668906 1 reflector.go:147] 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nW0813 20:04:50.884304 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:04:50.918193 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nW0813 20:04:52.839119 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:04:52.839544 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nF0813 20:05:17.755149 1 main.go:175] timed out waiting for FeatureGate detection\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T20:04:16Z\\\"}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.939442 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.939508 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.939537 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod 
\"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.939564 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.939591 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.939620 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.939649 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.939677 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.939707 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940075 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940120 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940149 3562 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940179 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940210 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940242 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940270 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940503 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940551 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940581 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940609 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod 
\"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940647 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940678 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940707 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940736 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940818 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940850 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940878 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940929 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:22 crc 
kubenswrapper[3562]: I1213 00:06:22.940965 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.940995 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941024 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941051 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941080 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941152 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941190 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941218 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941246 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941276 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941303 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941332 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941369 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941400 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941431 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941461 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941489 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941534 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941562 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941588 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941618 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941647 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941677 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941706 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941737 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941767 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941795 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941823 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941851 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941880 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941930 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941959 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.941989 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942018 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942047 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942104 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942132 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942166 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942196 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942224 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942255 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942283 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942315 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942346 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942374 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942402 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942429 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942456 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942486 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942516 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942548 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942577 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942606 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942639 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942671 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942702 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942733 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942761 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942790 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942821 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-c2f8t\" (UniqueName: \"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942851 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942880 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942929 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942962 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.942993 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943024 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943054 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943084 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943125 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943157 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jw8\" (UniqueName: \"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943189 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943225 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943257 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943287 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943315 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943344 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943375 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943404 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943436 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943464 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943494 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943523 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943554 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943585 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943616 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943648 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943677 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943709 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943744 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943775 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943805 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943836 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943865 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943894 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943945 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.943978 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944010 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944042 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944071 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944103 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944133 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944168 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr9t\" (UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944201 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944232 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944260 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944291 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944324 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944356 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944395 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944426 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944455 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944484 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944517 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944551 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944580 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944649 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944681 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944714 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944744 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944775 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944803 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944833 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944863 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944898 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944948 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.944980 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945012 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945041 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945072 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945140 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945174 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945206 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945239 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945274 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945305 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945337 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945367 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945398 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945428 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945459 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945492 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945523 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945555 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945587 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945615 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945648 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945679 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945712 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945743 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945777 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945809 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945841 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945871 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945917 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945952 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.945983 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.946189 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.946236 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.946413 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.946516 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.946851 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.946975 3562 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.947018 3562 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.947074 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.947030 3562 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.947193 3562 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.947291 3562 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.947363 3562 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.947402 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.947409 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.947472 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.947523 3562 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.947540 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.947583 3562 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.947649 3562 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.947802 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.947529 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.948055 3562 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.948060 3562 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.948121 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.948146 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.948185 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.948244 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.948267 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.948305 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.948411 3562 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.948438 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.948547 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.948625 3562 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.948793 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " 
pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.948848 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.948885 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:07Z\\\"}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.948981 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.949006 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.949136 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:22 crc 
kubenswrapper[3562]: I1213 00:06:22.949279 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.949338 3562 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.949387 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.949392 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.949392 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.949438 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.949468 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.949545 3562 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.949648 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.949711 3562 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.949759 3562 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.949774 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.949790 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:22 
crc kubenswrapper[3562]: E1213 00:06:22.949830 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.950252 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.950287 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.950338 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.949139 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.950513 3562 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.950560 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.950564 3562 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.950566 3562 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.950599 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.950621 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.950812 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.950954 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 
00:06:22.951088 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.951224 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.951340 3562 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.951370 3562 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.951441 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.451411075 +0000 UTC m=+19.934641598 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.951498 3562 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.951531 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.451521808 +0000 UTC m=+19.934752331 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.951743 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.951893 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.951946 3562 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.951952 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.451941246 +0000 UTC m=+19.935171769 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.951958 3562 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.951983 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.451972546 +0000 UTC m=+19.935203069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952002 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.451992857 +0000 UTC m=+19.935223380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952012 3562 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952027 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452012798 +0000 UTC m=+19.935243331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952038 3562 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.951986 3562 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.948415 3562 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952059 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452041869 +0000 UTC m=+19.935272382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952086 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452077119 +0000 UTC m=+19.935307642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952103 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:23.4520956 +0000 UTC m=+19.935326123 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952122 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.45211527 +0000 UTC m=+19.935345793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952134 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.45212739 +0000 UTC m=+19.935357913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952147 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452140421 +0000 UTC m=+19.935370944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952158 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452152801 +0000 UTC m=+19.935383314 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952170 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:23.452163641 +0000 UTC m=+19.935394164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952194 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452181471 +0000 UTC m=+19.935412234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952228 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452219462 +0000 UTC m=+19.935450205 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.952241 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.951915 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952247 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452234632 +0000 UTC m=+19.935465395 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952293 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.952373 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952400 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952261 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952475 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952580 3562 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952630 3562 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952639 3562 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952739 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952755 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452272813 +0000 UTC m=+19.935970685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952776 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:23.452766783 +0000 UTC m=+19.935997526 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952795 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452784763 +0000 UTC m=+19.936015286 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952816 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452802764 +0000 UTC m=+19.936033497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952832 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452822914 +0000 UTC m=+19.936053657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.952837 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952850 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452840224 +0000 UTC m=+19.936070957 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952891 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452876965 +0000 UTC m=+19.936107678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952945 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452924766 +0000 UTC m=+19.936155489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.952986 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.452962167 +0000 UTC m=+19.936192970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953016 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453006117 +0000 UTC m=+19.936236850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953033 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453024128 +0000 UTC m=+19.936254851 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953051 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453041838 +0000 UTC m=+19.936272591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953120 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453090089 +0000 UTC m=+19.936320782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953139 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.45313048 +0000 UTC m=+19.936361003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953157 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.45314646 +0000 UTC m=+19.936377173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953176 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453166761 +0000 UTC m=+19.936397484 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953194 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453184771 +0000 UTC m=+19.936415504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953212 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453203581 +0000 UTC m=+19.936434304 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953227 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453219862 +0000 UTC m=+19.936450625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953244 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453235022 +0000 UTC m=+19.936465745 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.953246 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953259 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453251022 +0000 UTC m=+19.936481725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953275 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453266733 +0000 UTC m=+19.936497456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953292 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453284073 +0000 UTC m=+19.936514806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953307 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453299163 +0000 UTC m=+19.936529906 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953325 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453315014 +0000 UTC m=+19.936545777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953341 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453332804 +0000 UTC m=+19.936563537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953356 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453347854 +0000 UTC m=+19.936578587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953374 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453364104 +0000 UTC m=+19.936594857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953357 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953390 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:23.453382085 +0000 UTC m=+19.936612808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953405 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453396305 +0000 UTC m=+19.936627038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953425 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453411595 +0000 UTC m=+19.936642318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953445 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453436756 +0000 UTC m=+19.936667489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953460 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453452746 +0000 UTC m=+19.936683489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953476 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453468607 +0000 UTC m=+19.936699360 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953489 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453482697 +0000 UTC m=+19.936713430 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953503 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453496077 +0000 UTC m=+19.936726820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953521 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453513257 +0000 UTC m=+19.936744000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.953523 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953537 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453528778 +0000 UTC m=+19.936759511 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953554 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453546938 +0000 UTC m=+19.936777671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953571 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453561888 +0000 UTC m=+19.936792631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953589 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453581319 +0000 UTC m=+19.936812052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953605 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453596379 +0000 UTC m=+19.936827102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953618 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453611349 +0000 UTC m=+19.936842082 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953636 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.45362936 +0000 UTC m=+19.936860103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953651 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.45364415 +0000 UTC m=+19.936874933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953691 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453682201 +0000 UTC m=+19.936912944 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.953729 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953744 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.953766 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953792 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453772182 +0000 UTC m=+19.937002705 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953831 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953864 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453856394 +0000 UTC m=+19.937087117 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.953881 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.953928 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.953934 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.953949 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.453939286 +0000 UTC m=+19.937169809 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.953981 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954008 3562 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954023 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954043 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954047 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.454035048 +0000 UTC m=+19.937265791 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954079 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954084 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954093 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954008 3562 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954112 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.454102559 +0000 UTC m=+19.937333292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954123 3562 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954132 3562 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954139 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.454128009 +0000 UTC m=+19.937358532 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954144 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954156 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.45414701 +0000 UTC m=+19.937377773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954176 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.45416777 +0000 UTC m=+19.937398523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954181 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954195 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.454186531 +0000 UTC m=+19.937417054 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954223 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954252 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954251 3562 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954292 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.454283302 +0000 UTC m=+19.937514025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954313 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954343 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954372 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.454364124 +0000 UTC m=+19.937594647 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954346 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954410 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954443 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954472 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954500 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954533 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954804 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954811 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954826 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.954863 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.954932 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.955016 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.455003467 +0000 UTC m=+19.938234180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.955027 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.955190 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.955502 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.955554 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.955585 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.955606 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.955628 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.955652 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.955722 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.955750 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.955807 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.455796113 +0000 UTC m=+19.939026646 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.955919 3562 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.955952 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.955986 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.955989 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.455974866 +0000 UTC m=+19.939205389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956022 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.956034 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956054 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.956068 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.456057298 +0000 UTC m=+19.939287821 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.956077 3562 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956096 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.956113 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.456103679 +0000 UTC m=+19.939334202 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956137 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.956156 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.956195 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.45618245 +0000 UTC m=+19.939413183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956162 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.956230 3562 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956234 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.956257 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.456250282 +0000 UTC m=+19.939480805 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956274 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956296 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956318 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956338 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956357 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956379 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956399 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956419 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:22 
crc kubenswrapper[3562]: I1213 00:06:22.956441 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4f9\" (UniqueName: \"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956475 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956521 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956525 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956545 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956570 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956592 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956616 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956637 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") 
pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956660 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956683 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956705 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956728 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956752 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4sfhc\" (UniqueName: \"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956772 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956794 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956817 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 
00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956838 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956861 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956898 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956956 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.956993 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.957029 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.957057 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.957083 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.957136 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.957160 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.957203 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.957225 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.957246 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.957413 3562 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.957442 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.457434265 +0000 UTC m=+19.940664788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958046 3562 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958059 3562 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958091 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:23.458078908 +0000 UTC m=+19.941309431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958105 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.458099139 +0000 UTC m=+19.941329662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958157 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.958209 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958268 3562 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958298 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.458287822 +0000 UTC m=+19.941518585 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958333 3562 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958358 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.458352384 +0000 UTC m=+19.941582907 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958418 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958440 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.458434145 +0000 UTC m=+19.941664668 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958459 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958482 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.458475286 +0000 UTC m=+19.941706029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958523 3562 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958546 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958585 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.458573228 +0000 UTC m=+19.941803941 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958622 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958651 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958716 3562 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958653 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.458643519 +0000 UTC m=+19.941874252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958781 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.458738531 +0000 UTC m=+19.941969054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958818 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.458807543 +0000 UTC m=+19.942038286 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.958994 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.959090 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.459055617 +0000 UTC m=+19.942286350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.959671 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.959774 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.959828 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.459816862 +0000 UTC m=+19.943047555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.959958 3562 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.960003 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.459992646 +0000 UTC m=+19.943223169 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.960038 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.460029906 +0000 UTC m=+19.943260639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.960084 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.460075377 +0000 UTC m=+19.943306120 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.968790 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.969652 3562 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.969754 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.46972487 +0000 UTC m=+19.952955543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.971346 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.972243 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.974845 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.974921 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.976405 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.976505 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.976523 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.976555 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.976571 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.976634 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.476612806 +0000 UTC m=+19.959843409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.976693 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.976708 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.976717 3562 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.976749 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.476738778 +0000 UTC m=+19.959969521 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.976778 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.977051 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.977067 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.977079 3562 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.977147 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.477130936 +0000 UTC m=+19.960361629 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.977684 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.977725 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.977744 3562 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.977811 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:23.47778974 +0000 UTC m=+19.961020453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.978880 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.992435 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.992864 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.993584 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.993617 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.993632 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.993687 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.493667525 +0000 UTC m=+19.976898258 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.993758 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.993771 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.993826 3562 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.993857 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.493847638 +0000 UTC m=+19.977078391 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.993976 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994007 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994022 3562 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994026 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994043 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994053 3562 projected.go:200] Error preparing data for projected volume 
kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994069 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.494054033 +0000 UTC m=+19.977284556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994093 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.494080464 +0000 UTC m=+19.977310987 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994117 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994133 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994143 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994146 3562 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994160 3562 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994170 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object 
"openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994178 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.494168795 +0000 UTC m=+19.977399528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994202 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.494191966 +0000 UTC m=+19.977422699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994209 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994224 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994232 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.994261 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.494253567 +0000 UTC m=+19.977484090 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.998102 3562 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.998126 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.998169 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.498155874 +0000 UTC m=+19.981386397 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.998248 3562 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.998264 3562 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.998274 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.998307 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.498298187 +0000 UTC m=+19.981528720 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.998357 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.998371 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.998380 3562 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: E1213 00:06:22.998411 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.498402189 +0000 UTC m=+19.981632722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:22 crc kubenswrapper[3562]: I1213 00:06:22.998674 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:22.999080 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:22.999116 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:22.999127 3562 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:22.999160 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" 
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:22.999185 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.499166984 +0000 UTC m=+19.982397507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.000002 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.000013 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.000037 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.000045 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.000057 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.000065 3562 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.000049 3562 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.000019 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.000080 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.001141 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.501108103 +0000 UTC m=+19.984338776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.001192 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.501182514 +0000 UTC m=+19.984413197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.001226 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.501218285 +0000 UTC m=+19.984449008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.002007 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.002034 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.002045 3562 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.002082 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.502070112 +0000 UTC m=+19.985300845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.002669 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.002857 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qr9t\" (UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.003165 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.003190 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.003201 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.003236 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.503224985 +0000 UTC m=+19.986455708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.003940 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.004272 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.005894 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.006278 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.006822 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.006931 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.006994 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.007110 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.507093932 +0000 UTC m=+19.990324455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.007741 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.008472 3562 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.008502 3562 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.008516 3562 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.008571 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.508555221 +0000 UTC m=+19.991785744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.009380 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jw8\" (UniqueName: \"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.011508 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.012337 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.012357 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.012367 3562 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.012402 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.512391127 +0000 UTC m=+19.995621650 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.013430 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.015414 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.020683 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.020715 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.020727 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.020788 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.520771393 +0000 UTC m=+20.004001916 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.037856 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.037957 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.037974 3562 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.038063 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.538043476 +0000 UTC m=+20.021273999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058160 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058226 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058258 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058273 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058279 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058302 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058365 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058415 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058420 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058434 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058445 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058469 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058471 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058489 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058517 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058703 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058727 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058752 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058779 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058844 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058873 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058951 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058954 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058972 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.058982 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059016 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059120 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059141 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059332 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059360 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059375 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059397 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059442 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059457 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059487 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059491 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059470 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059553 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059606 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059668 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059700 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.059781 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.059807 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.059822 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059832 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059883 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.059883 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.55986246 +0000 UTC m=+20.043093213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.059981 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060033 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060080 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060123 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060202 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060270 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060386 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060418 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060498 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060590 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060640 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060674 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060687 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060693 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060707 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060731 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060736 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060773 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.060957 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061016 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061110 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061135 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061192 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061218 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061230 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061273 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061283 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061311 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061357 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061478 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061404 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061843 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061971 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062003 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062065 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062088 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062115 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062149 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062175 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062200 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062250 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062350 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061876 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.061121 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062744 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062890 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062937 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062961 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.062983 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.063004 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.063051 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.063073 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.063095 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.083142 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.099224 3562 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.099445 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.099555 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.599538108 +0000 UTC m=+20.082768631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.117892 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.117945 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.117959 3562 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.118026 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.618005335 +0000 UTC m=+20.101235858 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.129321 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.137669 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.137832 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.137949 3562 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.138091 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.638072603 +0000 UTC m=+20.121303116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.138218 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.138352 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q"
Dec 13 00:06:23 crc kubenswrapper[3562]: W1213 00:06:23.154016 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dbadf0a_ba02_47d6_96a9_0995c1e8e4a8.slice/crio-e5ca30db4af6dbbd34a04893505b873f96498bc58c6a3ed5f6872b6d33e5e963 WatchSource:0}: Error finding container e5ca30db4af6dbbd34a04893505b873f96498bc58c6a3ed5f6872b6d33e5e963: Status 404 returned error can't find the container with id e5ca30db4af6dbbd34a04893505b873f96498bc58c6a3ed5f6872b6d33e5e963
Dec 13 00:06:23 crc kubenswrapper[3562]: W1213 00:06:23.158661 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a23c0ee_5648_448c_b772_83dced2891ce.slice/crio-d35f811370995fa191df09ad947a2908e2db137c04201ceaee9c27f71f563738 WatchSource:0}: Error finding container d35f811370995fa191df09ad947a2908e2db137c04201ceaee9c27f71f563738: Status 404 returned error can't find the container with id d35f811370995fa191df09ad947a2908e2db137c04201ceaee9c27f71f563738
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.159466 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.161374 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.177017 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.180237 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.185631 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.193735 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l92hr"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.200947 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2f8t\" (UniqueName: \"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Dec 13 00:06:23 crc kubenswrapper[3562]: W1213 00:06:23.205752 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf1a8b70_3856_486f_9912_a2de1d57c3fb.slice/crio-5734fc3a6ab7719df74d7651f95408483ac5a46384eb167915e1f5bf8e694203 WatchSource:0}: Error finding container 5734fc3a6ab7719df74d7651f95408483ac5a46384eb167915e1f5bf8e694203: Status 404 returned error can't find the container with id 5734fc3a6ab7719df74d7651f95408483ac5a46384eb167915e1f5bf8e694203
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.223627 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.250014 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.258824 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.258856 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.258870 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.258947 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.758926054 +0000 UTC m=+20.242156577 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.281297 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.281329 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.281362 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.281429 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.78140745 +0000 UTC m=+20.264637983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.301188 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.301231 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.301251 3562 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.301411 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.801385157 +0000 UTC m=+20.284615680 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.322160 3562 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.322195 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.322249 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.822230621 +0000 UTC m=+20.305461144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.342918 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.358431 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.358463 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.358475 3562 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.358537 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.858518942 +0000 UTC m=+20.341749465 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.377826 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.377862 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.377876 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.377942 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.877924218 +0000 UTC m=+20.361154741 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.401539 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-q88th" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.408438 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4f9\" (UniqueName: \"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.417627 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.417669 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.417684 3562 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.417752 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.917732158 +0000 UTC m=+20.400962681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: W1213 00:06:23.421454 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod475321a1_8b7e_4033_8f72_b05a8b377347.slice/crio-00250378014cfa681a7b9e944b385e3c41bb9e64f5772d6afcc893a612ada1aa WatchSource:0}: Error finding container 00250378014cfa681a7b9e944b385e3c41bb9e64f5772d6afcc893a612ada1aa: Status 404 returned error can't find the container with id 00250378014cfa681a7b9e944b385e3c41bb9e64f5772d6afcc893a612ada1aa Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.428003 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.438353 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.438395 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.438408 3562 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.438439 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.438474 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.938455189 +0000 UTC m=+20.421685712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.452483 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.458652 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.458682 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.458697 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.458764 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:23.958744742 +0000 UTC m=+20.441975265 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: W1213 00:06:23.463754 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fb762d1_812f_43f1_9eac_68034c1ecec7.slice/crio-bf2656b92b0ba84ecf3eeeb1a4c85d632224258156fc8341132ea67867e4f27c WatchSource:0}: Error finding container bf2656b92b0ba84ecf3eeeb1a4c85d632224258156fc8341132ea67867e4f27c: Status 404 returned error can't find the container with id bf2656b92b0ba84ecf3eeeb1a4c85d632224258156fc8341132ea67867e4f27c Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.468096 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.479963 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480044 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480075 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480107 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480143 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480175 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" 
(UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480204 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480234 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480273 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480304 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480346 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480389 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480433 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480463 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 
13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480493 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480520 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480549 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480579 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.480619 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.480772 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.480838 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.480818861 +0000 UTC m=+20.964049384 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.480915 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.480946 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.480936543 +0000 UTC m=+20.964167066 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.480988 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481016 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481008314 +0000 UTC m=+20.964238837 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481064 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481091 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481083206 +0000 UTC m=+20.964313729 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481131 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481158 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481150507 +0000 UTC m=+20.964381030 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481198 3562 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481226 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481217508 +0000 UTC m=+20.964448031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481269 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481295 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.48128703 +0000 UTC m=+20.964517553 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481337 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481363 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481355501 +0000 UTC m=+20.964586024 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481431 3562 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481458 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481450323 +0000 UTC m=+20.964680846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481492 3562 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481516 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481509135 +0000 UTC m=+20.964739658 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481551 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481573 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481566656 +0000 UTC m=+20.964797179 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481607 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481633 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481625507 +0000 UTC m=+20.964856030 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481672 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481701 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481692939 +0000 UTC m=+20.964923462 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481737 3562 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481763 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.48175486 +0000 UTC m=+20.964985383 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481804 3562 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481830 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481821781 +0000 UTC m=+20.965052304 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.481870 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.482227 3562 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.482302 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.482365 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.482522 3562 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.483552 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.484282 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.481890183 +0000 UTC m=+20.965120706 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.484319 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.484304271 +0000 UTC m=+20.967534794 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.484337 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.484328261 +0000 UTC m=+20.967558784 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.484358 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.484349562 +0000 UTC m=+20.967580085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.484374 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.484366042 +0000 UTC m=+20.967596565 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.484439 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.484607 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.484678 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.484745 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.484782 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.484798 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.484835 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.484818661 +0000 UTC m=+20.968049374 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.484890 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.484945 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.484980 3562 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.484992 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485029 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.485009464 +0000 UTC m=+20.968239987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485049 3562 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485065 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485081 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.485072076 +0000 UTC m=+20.968302779 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485113 3562 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485136 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485172 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485138 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.485131017 +0000 UTC m=+20.968361530 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485197 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485212 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485224 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.485216659 +0000 UTC m=+20.968447392 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485252 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485253 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485279 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.48527292 +0000 UTC m=+20.968503443 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485282 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485310 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485334 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485356 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485435 3562 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485462 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.485454723 +0000 UTC m=+20.968685476 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485477 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.485469933 +0000 UTC m=+20.968700686 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485490 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.485483784 +0000 UTC m=+20.968714307 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485310 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485515 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.485509024 +0000 UTC m=+20.968739747 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485540 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485541 3562 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485566 3562 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485583 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.485575265 +0000 UTC m=+20.968805988 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485614 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485632 3562 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485638 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485664 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485688 3562 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485750 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485814 3562 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485876 3562 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485897 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.485677707 +0000 UTC m=+20.968908230 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.485958 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.485950063 +0000 UTC m=+20.969180586 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.485991 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486016 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486046 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486070 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486091 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486117 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486140 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486164 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486189 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486329 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486372 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486396 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486429 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486458 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486481 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486501 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486532 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486556 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486578 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486597 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486619 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486653 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486675 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486697 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486721 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486751 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486780 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486811 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486832 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486856 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.486877 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487001 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487050 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487078 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487102 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487140 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487171 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487198 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487218 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487240 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487269 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487290 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487314 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487370 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487412 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487436 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487457 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487480 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487501 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487522 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487543 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487563 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487583 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487605 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487634 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487656 3562 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487705 3562 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487719 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.487701538 +0000 UTC m=+20.970932261 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487668 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487783 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487735 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.487728299 +0000 UTC m=+20.970958822 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487807 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.48779595 +0000 UTC m=+20.971026733 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487825 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.48781604 +0000 UTC m=+20.971046813 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487839 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.487830801 +0000 UTC m=+20.971061564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487853 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.487845011 +0000 UTC m=+20.971075774 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487883 3562 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487896 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.487887432 +0000 UTC m=+20.971118185 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487932 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.487922412 +0000 UTC m=+20.971153155 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487760 3562 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487970 3562 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487979 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.487969673 +0000 UTC m=+20.971200416 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487996 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.487986584 +0000 UTC m=+20.971217337 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487855 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488017 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488029 3562 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488046 3562 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488054 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488047265 +0000 UTC m=+20.971277788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488076 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488068655 +0000 UTC m=+20.971299388 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.487938 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488093 3562 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488119 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488107646 +0000 UTC m=+20.971338369 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488137 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488127996 +0000 UTC m=+20.971358749 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488191 3562 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488220 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488211628 +0000 UTC m=+20.971442361 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488271 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488301 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488291259 +0000 UTC m=+20.971522012 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488321 3562 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488328 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.48831129 +0000 UTC m=+20.971542043 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488351 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.48833867 +0000 UTC m=+20.971569193 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488373 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488385 3562 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488404 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488395322 +0000 UTC m=+20.971626075 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488426 3562 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488434 3562 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488426 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488417292 +0000 UTC m=+20.971647865 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488458 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488452393 +0000 UTC m=+20.971682916 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488474 3562 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488486 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488508 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488502924 +0000 UTC m=+20.971733447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488534 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488551 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488545764 +0000 UTC m=+20.971776287 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488575 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488592 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488587265 +0000 UTC m=+20.971817788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488616 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488634 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488629016 +0000 UTC m=+20.971859539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488658 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488679 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488674457 +0000 UTC m=+20.971904980 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488710 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488729 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488724328 +0000 UTC m=+20.971954851 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488756 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488776 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488770749 +0000 UTC m=+20.972001272 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488801 3562 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488819 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.48881352 +0000 UTC m=+20.972044043 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488846 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488862 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.48885771 +0000 UTC m=+20.972088233 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488889 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488932 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488916292 +0000 UTC m=+20.972146815 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488964 3562 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.488982 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.488976973 +0000 UTC m=+20.972207496 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489009 3562 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489026 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489020404 +0000 UTC m=+20.972250927 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489053 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489070 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489065395 +0000 UTC m=+20.972295918 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489107 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489117 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489127 3562 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489137 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489120416 +0000 UTC m=+20.972350939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489167 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489156666 +0000 UTC m=+20.972387389 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489199 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489206 3562 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489231 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489222418 +0000 UTC m=+20.972453151 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489248 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489240088 +0000 UTC m=+20.972470841 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489261 3562 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489290 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489280999 +0000 UTC m=+20.972511752 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489297 3562 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489327 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.4893188 +0000 UTC m=+20.972549563 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489332 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489360 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.4893527 +0000 UTC m=+20.972583443 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489415 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489436 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489430422 +0000 UTC m=+20.972660945 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489370 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489453 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489458 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489453082 +0000 UTC m=+20.972683605 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489481 3562 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489496 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489485443 +0000 UTC m=+20.972716186 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489514 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489505473 +0000 UTC m=+20.972736186 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489526 3562 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489545 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:24.489539454 +0000 UTC m=+20.972769977 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.487883 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489554 3562 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.489593 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489620 3562 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489647 3562 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.489622 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489651 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489641906 +0000 UTC m=+20.972872649 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489676 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489670626 +0000 UTC m=+20.972901149 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.489693 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489705 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.489715 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.489747 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489765 3562 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.489771 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.489795 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489824 3562 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489849 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.48984243 +0000 UTC m=+20.973072943 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489862 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.48985615 +0000 UTC m=+20.973086673 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489876 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489889 3562 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489925 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489898412 +0000 UTC m=+20.973129135 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489944 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489935273 +0000 UTC m=+20.973166036 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.489958 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.489951133 +0000 UTC m=+20.973181866 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490006 3562 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490035 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.490026954 +0000 UTC m=+20.973257697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490083 3562 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490108 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.490100486 +0000 UTC m=+20.973331219 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490152 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490176 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.490169057 +0000 UTC m=+20.973399790 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490214 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490240 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:24.490232278 +0000 UTC m=+20.973463011 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490420 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490456 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.490446362 +0000 UTC m=+20.973677075 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490532 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490548 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490562 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490594 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.490584685 +0000 UTC m=+20.973815418 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490638 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490668 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.490659557 +0000 UTC m=+20.973890280 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490710 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490737 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.490728548 +0000 UTC m=+20.973959301 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490782 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490812 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.490803529 +0000 UTC m=+20.974034292 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490857 3562 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.490885 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.490876681 +0000 UTC m=+20.974107414 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491022 3562 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491055 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.491045164 +0000 UTC m=+20.974275907 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491071 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491085 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491095 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491101 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491111 3562 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491027 3562 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491116 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.491100465 +0000 UTC m=+20.974330988 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491152 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.491144446 +0000 UTC m=+20.974374969 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491164 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.491158926 +0000 UTC m=+20.974389449 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491179 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.491173247 +0000 UTC m=+20.974403770 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.491191 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.491184907 +0000 UTC m=+20.974415430 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.502062 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sfhc\" (UniqueName: \"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.520330 3562 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.520520 3562 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.520602 3562 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.520744 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.020720584 +0000 UTC m=+20.503951107 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.544348 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.559461 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.559498 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.559511 3562 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.559576 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.059557155 +0000 UTC m=+20.542787678 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.571870 3562 kubelet.go:1935] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.590716 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.590759 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.590932 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.591311 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.591536 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.591575 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.591660 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: 
\"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.591701 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.591780 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.591833 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.591887 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.591941 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.592023 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.592078 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.592102 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" 
(UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.592131 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.592491 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.592551 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.592573 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.592760 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.592963 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.593376 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.593769 3562 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.593796 3562 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object 
"openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.593808 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.593863 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.593846216 +0000 UTC m=+21.077076739 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594012 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594027 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594037 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594068 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.594058391 +0000 UTC m=+21.077288914 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594121 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594135 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594143 3562 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594171 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.594163033 +0000 UTC m=+21.077393556 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594221 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594233 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594241 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594266 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.594258204 +0000 UTC m=+21.077488727 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594321 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594338 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594346 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594374 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.594365447 +0000 UTC m=+21.077595970 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594432 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594446 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594455 3562 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594486 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.594476889 +0000 UTC m=+21.077707412 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594541 3562 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594555 3562 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594565 3562 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594592 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.594584111 +0000 UTC m=+21.077814634 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594648 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594661 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594669 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594724 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.594715323 +0000 UTC m=+21.077945856 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594784 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594797 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594805 3562 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594834 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.594824365 +0000 UTC m=+21.078054888 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.594891 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595140 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595158 3562 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595204 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.595180992 +0000 UTC m=+21.078411515 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595272 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595286 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595295 3562 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595324 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.595315935 +0000 UTC m=+21.078546458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595382 3562 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595394 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595422 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.595413347 +0000 UTC m=+21.078643870 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595478 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595492 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595502 3562 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595530 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.595521759 +0000 UTC m=+21.078752282 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595590 3562 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595603 3562 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595611 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595640 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.595630741 +0000 UTC m=+21.078861264 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595696 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595709 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595717 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595745 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.595736893 +0000 UTC m=+21.078967416 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595808 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595835 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595844 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.595871 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.595863256 +0000 UTC m=+21.079093789 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.596102 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.596120 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.596284 3562 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.596323 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.596313584 +0000 UTC m=+21.079544107 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.596722 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.596750 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.596896 3562 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.596987 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.596949018 +0000 UTC m=+21.080179541 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597082 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597099 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597109 3562 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597170 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.597159382 +0000 UTC m=+21.080389915 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597264 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597310 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597325 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597358 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.597347046 +0000 UTC m=+21.080577569 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597495 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597518 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597528 3562 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597587 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.5975768 +0000 UTC m=+21.080807323 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597684 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597700 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597738 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.597770 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.597760684 +0000 UTC m=+21.080991207 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.598229 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.639330 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e8708a-e40d-4d28-846b-c52eda4d1755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-7fc54b8dd7-d2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.661218 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.673774 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:04Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:05Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:03Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6eebb14ec01238ec01feb97bef74e8890199677e317be96b05f080f6c10de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T00:06:05Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d25731c743b17f30917bb1194cad0b8dbdf5fb58787af71bc746fb12a140f23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d25731c743b17f30917bb1194cad0b8dbdf5fb58787af71bc746fb12a140f23c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T00:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T00:06:04Z\\\"}}}],\\\"startTime\\\":\\\"2025-12-13T00:06:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:23 crc kubenswrapper[3562]: W1213 00:06:23.692337 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc291782_27d2_4a74_af79_c7dcb31535d2.slice/crio-aee64bc00007aa027bae355f350ba7c284e5956f3047314a80af0d9e943f0cd9 WatchSource:0}: Error finding container aee64bc00007aa027bae355f350ba7c284e5956f3047314a80af0d9e943f0cd9: Status 404 returned error can't find the container with id aee64bc00007aa027bae355f350ba7c284e5956f3047314a80af0d9e943f0cd9 Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.694833 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.695027 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.695054 3562 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.695082 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.695179 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.695200 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.695213 3562 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.695224 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.695202169 +0000 UTC m=+21.178432842 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.695262 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.69525073 +0000 UTC m=+21.178481433 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.695370 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.695564 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.695600 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.695614 3562 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.695663 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.695645998 +0000 UTC m=+21.178876701 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.715227 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.730881 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.753097 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:23 crc kubenswrapper[3562]: W1213 00:06:23.755013 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b6d14a5_ca00_40c7_af7a_051a98a24eed.slice/crio-3b921022deb43c374aacc6519f48b6aadc1dd7ce727bf21a3a2cd6860860fefb WatchSource:0}: Error finding container 3b921022deb43c374aacc6519f48b6aadc1dd7ce727bf21a3a2cd6860860fefb: Status 404 returned error can't find the container with id 3b921022deb43c374aacc6519f48b6aadc1dd7ce727bf21a3a2cd6860860fefb Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.794570 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T20:05:09Z\\\",\\\"message\\\":\\\"ck openshift-cluster-machine-approver/cluster-machine-approver-leader: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader\\\\\\\": dial tcp 10.217.4.1:443: i/o timeout\\\\nE0813 20:04:17.937199 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader\\\\\\\": dial tcp 10.217.4.1:443: i/o timeout\\\\nI0813 20:04:38.936003 1 leaderelection.go:285] failed to renew lease openshift-cluster-machine-approver/cluster-machine-approver-leader: timed out waiting for the condition\\\\nE0813 20:05:08.957257 1 leaderelection.go:308] Failed to release lock: Put \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader\\\\\\\": dial tcp 10.217.4.1:443: i/o timeout\\\\nF0813 20:05:08.990431 1 main.go:235] unable to run the manager: leader election lost\\\\nI0813 20:05:09.028498 1 internal.go:516] \\\\\\\"Stopping and waiting for non leader election runnables\\\\\\\"\\\\nI0813 20:05:09.028591 1 internal.go:520] \\\\\\\"Stopping and waiting for leader election runnables\\\\\\\"\\\\nI0813 20:05:09.028608 1 internal.go:526] \\\\\\\"Stopping and waiting for caches\\\\\\\"\\\\nI0813 20:05:09.028585 1 recorder.go:104] 
\\\\\\\"crc_998ad275-6fd6-49e7-a1d3-0d4cd7031028 stopped leading\\\\\\\" logger=\\\\\\\"events\\\\\\\" type=\\\\\\\"Normal\\\\\\\" object={\\\\\\\"kind\\\\\\\":\\\\\\\"Lease\\\\\\\",\\\\\\\"namespace\\\\\\\":\\\\\\\"openshift-cluster-machine-approver\\\\\\\",\\\\\\\"name\\\\\\\":\\\\\\\"cluster-machine-approver-leader\\\\\\\",\\\\\\\"uid\\\\\\\":\\\\\\\"396b5b52-acf2-4d11-8e98-69ecff2f52d0\\\\\\\",\\\\\\\"apiVersion\\\\\\\":\\\\\\\"coordination.k8s.io/v1\\\\\\\",\\\\\\\"resourceVersion\\\\\\\":\\\\\\\"30699\\\\\\\"} reason=\\\\\\\"LeaderElection\\\\\\\"\\\\nI0813 20:05:09.028819 1 internal.go:530] \\\\\\\"Stopping and waiting for webhooks\\\\\\\"\\\\nI0813 20:05:09.028849 1 internal.go:533] \\\\\\\"Stopping and waiting for HTTP servers\\\\\\\"\\\\nI0813 20:05:09.028884 1 internal.go:537] \\\\\\\"Wait completed, proceeding to shutdown the manager\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.799809 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.800173 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.800074 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.800579 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.800814 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.800983 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.80096345 +0000 UTC m=+21.284193973 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.800326 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.801214 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.801287 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.801388 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.801374808 +0000 UTC m=+21.284605331 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.836241 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.874154 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T20:01:35Z\\\",\\\"message\\\":\\\"73-4e9d-b5ff-47904d2b347f\\\\\\\", APIVersion:\\\\\\\"apps/v1\\\\\\\", ResourceVersion:\\\\\\\"\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager:\\\\ncause by changes in data.openshift-route-controller-manager.client-ca.configmap\\\\nI0813 20:01:32.709976 1 observer_polling.go:120] Observed file \\\\\\\"/var/run/secrets/serving-cert/tls.crt\\\\\\\" has been modified (old=\\\\\\\"f4b72f648a02bf4d745720b461c43dc88e5b533156c427b7905f426178ca53a1\\\\\\\", new=\\\\\\\"d241a06236d5f1f5f86885717c7d346103e02b5d1ed9dcf4c19f7f338250fbcb\\\\\\\")\\\\nW0813 20:01:32.710474 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was 
modified\\\\nI0813 20:01:32.710576 1 observer_polling.go:120] Observed file \\\\\\\"/var/run/secrets/serving-cert/tls.key\\\\\\\" has been modified (old=\\\\\\\"9fa7e5fbef9e286ed42003219ce81736b0a30e8ce2f7dd520c0c149b834fa6a0\\\\\\\", new=\\\\\\\"db6902c5c5fee4f9a52663b228002d42646911159d139a2d4d9110064da348fd\\\\\\\")\\\\nI0813 20:01:32.710987 1 genericapiserver.go:679] \\\\\\\"[graceful-termination] pre-shutdown hooks completed\\\\\\\" name=\\\\\\\"PreShutdownHooksStopped\\\\\\\"\\\\nI0813 20:01:32.711074 1 genericapiserver.go:536] \\\\\\\"[graceful-termination] shutdown event\\\\\\\" name=\\\\\\\"ShutdownInitiated\\\\\\\"\\\\nI0813 20:01:32.711163 1 object_count_tracker.go:151] \\\\\\\"StorageObjectCountTracker pruner is exiting\\\\\\\"\\\\nI0813 20:01:32.711622 1 base_controller.go:172] Shutting down StatusSyncer_openshift-controller-manager ...\\\\nI0813 20:01:32.711623 1 base_controller.go:172] Shutting down OpenshiftControllerManagerStaticResources ...\\\\nI0813 20:01:32.711872 1 operator.go:151] Shutting down OpenShiftControllerManagerOperator\\\\nI0813 20:01:32.711949 1 base_controller.go:172] Shutting down ResourceSyncController ...\\\\nI0813 20:01:32.711995 1 base_controller.go:172] Shutting down ConfigObserver ...\\\\nI0813 20:01:32.712115 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\\\\nW0813 20:01:32.712173 1 builder.go:131] graceful termination failed, controllers failed with error: stopped\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:04Z\\\"}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.874517 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.874615 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.874665 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.874696 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.874752 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.874760 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.874805 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.874989 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.875026 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.875071 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.875094 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.875141 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.875208 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.875267 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.875315 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.875326 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.875363 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.875415 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.875495 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.875573 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.875638 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.875701 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.903234 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.903305 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.903351 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.903400 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903452 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903489 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903503 3562 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903607 3562 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903652 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903653 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 13 
00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903674 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903686 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903725 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.90370061 +0000 UTC m=+21.386931143 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903763 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903793 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903802 3562 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903881 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.903854043 +0000 UTC m=+21.387084566 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903915 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.903893134 +0000 UTC m=+21.387123657 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: E1213 00:06:23.903932 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:24.903923555 +0000 UTC m=+21.387154308 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.917205 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T00:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:16Z\\\"}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:23 crc kubenswrapper[3562]: 
I1213 00:06:23.951802 3562 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"collect-profiles-29251920-wcws2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.983694 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"eab7e9cb774f25dd98e4f96ece2a2eec3b885209e4e78988bfeb2f792b55f0cf"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.983734 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"79f7de5b77dd2c3260e5eb52ea2ecbc0ebc9a88e05716451f996b4845c839805"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.985233 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"8b848656e6965bee6c449b89a2cb57984c2e8a91c08918ec202f98f8dc5cba60"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.985259 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"b591b298d317d474cef309c1e461a92bd735834b7bdd19d20fd9a8c87271dd7a"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.987545 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"d9cd6e7ed08a34334bf96054844371c2994ce56c114eef98d5962798f69d33b1"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.987566 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"6f050250cf2ec95c071158dd93704a2edcae951b1c7c4cbefc8d8a73798e9ee6"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.987575 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"695aa0b2a8d32f6c331eae9c26c851dae99584fb6956ff96cf513e44713ff511"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.989583 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"817e32cb6ed44ede483a37358b06afd80312e8a9c328a4725d5dcfa506033b67"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.989606 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" 
event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"c4c3ea4f660c39d8a7ef4271c77ddfac508b9b1a382204bdf3aae1a6af48f04e"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.989616 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"5b7e7a1efdab0444514ee68da95459453c53e3cb92d3e5086d867943dc6bebfb"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.991114 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"640ca961af1a69808102aac39c896839d7772ff4fa8e43f573804a91107170b1"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.991137 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"bf2656b92b0ba84ecf3eeeb1a4c85d632224258156fc8341132ea67867e4f27c"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.992740 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"22511aa8d6dc79c1754301b9800a2a0b1dc4d8841bd28ffd8250d62e84371bed"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.992762 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"b7d82466f165f2bb3d4fe16426613d95d9b0d485fd124fb0b70585b317131309"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.997590 3562 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0" exitCode=0 Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.997656 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.997673 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"c010b1c206b69a7e00e4458b3e8c1380ac49ca645d3bab467e11b98caeafb9f0"} Dec 13 00:06:23 crc kubenswrapper[3562]: I1213 00:06:23.999113 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"aee64bc00007aa027bae355f350ba7c284e5956f3047314a80af0d9e943f0cd9"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.000540 3562 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="07e47b79aa7e4b2080223155dda9e2f73d0b6a9aa8fb54457b73bd29f5b0da6a" exitCode=0 Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.000587 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" 
event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"07e47b79aa7e4b2080223155dda9e2f73d0b6a9aa8fb54457b73bd29f5b0da6a"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.000600 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"e5ca30db4af6dbbd34a04893505b873f96498bc58c6a3ed5f6872b6d33e5e963"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.001406 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"3b921022deb43c374aacc6519f48b6aadc1dd7ce727bf21a3a2cd6860860fefb"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.003180 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"274ac09b00962e79fbb054a46e064001b17a4772b3d58e35752b995a54ca97e6"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.003197 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"d35f811370995fa191df09ad947a2908e2db137c04201ceaee9c27f71f563738"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.004578 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.004621 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.004733 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.004874 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.004990 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.005026 3562 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.005112 3562 projected.go:294] 
Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.005145 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.005149 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.005182 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.005196 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.005161 3562 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.005120 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:25.005075264 +0000 UTC m=+21.488305787 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.006089 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:25.006010353 +0000 UTC m=+21.489241376 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.006124 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:25.006112905 +0000 UTC m=+21.489343438 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.006822 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"8d8820b939b94398f2a91e346722ef42c8fb082e0e27161b3c7e447a55e0cd6b"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.006855 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"5734fc3a6ab7719df74d7651f95408483ac5a46384eb167915e1f5bf8e694203"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.010231 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"958f2eff55abf2b420e4008cfd5b0306001e5f2c7ce3d7533e59c3fad562aeae"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.010274 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"00250378014cfa681a7b9e944b385e3c41bb9e64f5772d6afcc893a612ada1aa"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.013880 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"9e094dca68cea643c5f93fbe94d1a27d16da36c9c9e324953bb88d433cf561ad"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.014290 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"6804ec1c212eb4db4ccc0ad4459247b60499ee503bc83a9078ce849831b956b2"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.014587 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"dbde952e96a55eb4741ece6ea6ac0b53d639043706556da568c7b0875417563e"} Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.111035 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.111503 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 
00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.111211 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.111573 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.111590 3562 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.111727 3562 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.111760 3562 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.111774 3562 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.111781 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:25.111727952 +0000 UTC m=+21.594958475 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.111834 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:25.111810195 +0000 UTC m=+21.595040718 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.469437 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.478209 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 00:06:24 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld
Dec 13 00:06:24 crc kubenswrapper[3562]: [+]process-running ok
Dec 13 00:06:24 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.478321 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.525132 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.525368 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.525468 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.525566 3562 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.525628 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.525673 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.525647164 +0000 UTC m=+23.008877687 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.525689 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.525717 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.525592 3562 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.525790 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.525769866 +0000 UTC m=+23.009000389 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.525803 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.525813 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.525804247 +0000 UTC m=+23.009034770 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.525837 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.525824467 +0000 UTC m=+23.009054990 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.525888 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.525988 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.5259708 +0000 UTC m=+23.009201323 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526077 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526118 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526192 3562 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526208 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526245 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526232085 +0000 UTC m=+23.009462608 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526271 3562 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526287 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526315 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526304936 +0000 UTC m=+23.009535459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526348 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526394 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526395 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526439 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526443 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526479 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526471361 +0000 UTC m=+23.009701884 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526478 3562 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526516 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526507491 +0000 UTC m=+23.009738014 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526520 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526518 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526559 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526537 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526527832 +0000 UTC m=+23.009758355 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526589 3562 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526604 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526596333 +0000 UTC m=+23.009826856 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526630 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526647 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526624624 +0000 UTC m=+23.009855167 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526700 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526708 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526723 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526740 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526729796 +0000 UTC m=+23.009960319 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526758 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526749966 +0000 UTC m=+23.009980489 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526775 3562 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526796 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526808 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526834 3562 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526845 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526834188 +0000 UTC m=+23.010064711 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526861 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526854128 +0000 UTC m=+23.010084651 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.526879 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.526871738 +0000 UTC m=+23.010102261 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526920 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526968 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.526993 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.527022 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527028 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527048 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.527059 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527066 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.527057752 +0000 UTC m=+23.010288275 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527082 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.527074342 +0000 UTC m=+23.010304855 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527114 3562 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.527132 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527132 3562 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527139 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527146 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.527139854 +0000 UTC m=+23.010370377 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.527197 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527203 3562 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.527227 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527260 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527263 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.527230585 +0000 UTC m=+23.010461308 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527320 3562 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.527340 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527354 3562 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527348 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.527338348 +0000 UTC m=+23.010568871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527386 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.527378758 +0000 UTC m=+23.010609281 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527406 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.527393659 +0000 UTC m=+23.010624272 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527454 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.527419279 +0000 UTC m=+23.010650012 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527483 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.52747455 +0000 UTC m=+23.010705273 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.527510 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527594 3562 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527717 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.527708245 +0000 UTC m=+23.010938778 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.527743 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.527779 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.527921 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527826 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527992 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.52797675 +0000 UTC m=+23.011207523 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.527865 3562 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528043 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.528036431 +0000 UTC m=+23.011266954 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528071 3562 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528113 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.528105543 +0000 UTC m=+23.011336066 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.527967 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528137 3562 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.528176 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528179 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.528171764 +0000 UTC m=+23.011402287 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528216 3562 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528241 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.528234555 +0000 UTC m=+23.011465078 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.528218 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.528300 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528247 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.528330 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.528359 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.528397 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528411 3562 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528416 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528458 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528485 3562 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528486 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528513 3562 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528396 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.528389368 +0000 UTC m=+23.011619891 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528581 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.528560251 +0000 UTC m=+23.011790954 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.528587 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528596 3562 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528627 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.528619583 +0000 UTC m=+23.011850106 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.528663 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.528700 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.528683694 +0000 UTC m=+23.011914417 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529094 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.529085252 +0000 UTC m=+23.012315775 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529113 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.529107972 +0000 UTC m=+23.012338485 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529124 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.529118902 +0000 UTC m=+23.012349415 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.529147 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.529175 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.529217 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529280 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529304 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.529297637 +0000 UTC m=+23.012528160 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529306 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529407 3562 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529452 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.52944344 +0000 UTC m=+23.012673963 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529466 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.52945914 +0000 UTC m=+23.012689663 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.529452 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529514 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.529544 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529568 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.529547522 +0000 UTC m=+23.012778045 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529597 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529624 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.529618143 +0000 UTC m=+23.012848666 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.529638 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529681 3562 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.529770 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.529797 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529844 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529861 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.529847128 +0000 UTC m=+23.013077851 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529881 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.529873288 +0000 UTC m=+23.013103811 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.529944 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529973 3562 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.529997 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.5299908 +0000 UTC m=+23.013221323 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530002 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.529973 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530027 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.530020201 +0000 UTC m=+23.013250724 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530038 3562 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530058 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530086 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530112 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530119 3562 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530154 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.530147693 +0000 UTC m=+23.013378216 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530153 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530189 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530195 3562 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530214 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530227 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.530212545 +0000 UTC m=+23.013443068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530256 3562 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530269 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530285 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.530278646 +0000 UTC m=+23.013509169 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530301 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.530295546 +0000 UTC m=+23.013526069 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530307 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530315 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.530309067 +0000 UTC m=+23.013539590 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530316 3562 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530395 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530408 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.530397458 +0000 UTC m=+23.013628041 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530455 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530482 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.53047412 +0000 UTC m=+23.013704643 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530523 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530588 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530650 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530663 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.530643843 +0000 UTC m=+23.013874366 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530671 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530700 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530721 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530749 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530726 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.530712414 +0000 UTC m=+23.013942927 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530759 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530795 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530814 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530832 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530841 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 
13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.530796 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.530789796 +0000 UTC m=+23.014020319 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.530950 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531034 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531080 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531118 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531199 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531261 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531301 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531337 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531371 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531415 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531466 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531502 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531562 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531646 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531734 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531777 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531811 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531892 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531952 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.531989 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532022 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532065 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532101 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532135 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod 
\"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532166 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532249 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532287 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532338 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532376 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532418 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532455 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532500 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532539 3562 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532596 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532674 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532778 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532816 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532849 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532890 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532952 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.532990 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533153 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533190 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.533179924 +0000 UTC m=+23.016410457 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533242 3562 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533272 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.533262715 +0000 UTC m=+23.016493428 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533299 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.533283936 +0000 UTC m=+23.016514679 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533322 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.533313126 +0000 UTC m=+23.016543879 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533341 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.533330797 +0000 UTC m=+23.016561520 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533399 3562 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533428 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.533420358 +0000 UTC m=+23.016650881 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533479 3562 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533506 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.53349806 +0000 UTC m=+23.016728583 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533555 3562 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533583 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.533574701 +0000 UTC m=+23.016805224 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533625 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533654 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.533646383 +0000 UTC m=+23.016876916 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533711 3562 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533721 3562 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533755 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.533740975 +0000 UTC m=+23.016971698 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533802 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533833 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.533824186 +0000 UTC m=+23.017054709 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533873 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.533943 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.533891537 +0000 UTC m=+23.017122060 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534179 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534221 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.534203844 +0000 UTC m=+23.017434367 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534269 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534298 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.534289325 +0000 UTC m=+23.017519848 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534349 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534386 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.534372067 +0000 UTC m=+23.017602590 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534437 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534485 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.534471849 +0000 UTC m=+23.017702372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534530 3562 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534563 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.53455245 +0000 UTC m=+23.017782973 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534614 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534643 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.534634612 +0000 UTC m=+23.017865135 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534684 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534729 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.534720194 +0000 UTC m=+23.017950967 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534783 3562 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534813 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.534803796 +0000 UTC m=+23.018034319 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534860 3562 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534888 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.534879008 +0000 UTC m=+23.018109531 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534954 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.534988 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.53497794 +0000 UTC m=+23.018208683 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535541 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535569 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535584 3562 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535615 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:26.535605912 +0000 UTC m=+23.018836635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535659 3562 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535682 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.535675413 +0000 UTC m=+23.018906176 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535724 3562 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535755 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.535747545 +0000 UTC m=+23.018978308 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535799 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535838 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.535831206 +0000 UTC m=+23.019061729 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535918 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535948 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.535940188 +0000 UTC m=+23.019170951 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.535983 3562 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536018 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.53600796 +0000 UTC m=+23.019238713 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536062 3562 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536085 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536077651 +0000 UTC m=+23.019308424 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536119 3562 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536153 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536146792 +0000 UTC m=+23.019377315 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536197 3562 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536223 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536216664 +0000 UTC m=+23.019447417 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536256 3562 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536279 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536272005 +0000 UTC m=+23.019502768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536321 3562 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536344 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536337596 +0000 UTC m=+23.019568119 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536377 3562 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536398 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536391367 +0000 UTC m=+23.019622140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536430 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536450 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536444718 +0000 UTC m=+23.019675241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536481 3562 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536500 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536494699 +0000 UTC m=+23.019725222 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536531 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536552 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:26.53654587 +0000 UTC m=+23.019776393 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536605 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536620 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536629 3562 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536655 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536647662 +0000 UTC m=+23.019878425 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536695 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536719 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536711533 +0000 UTC m=+23.019942286 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536761 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536787 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:26.536779645 +0000 UTC m=+23.020010418 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536829 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536851 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536844676 +0000 UTC m=+23.020075439 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536894 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.536942 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.536934638 +0000 UTC m=+23.020165161 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.537004 3562 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.537033 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.53702579 +0000 UTC m=+23.020256313 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.537080 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.537106 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.537099071 +0000 UTC m=+23.020329594 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.634691 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.635152 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.635191 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.635273 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.635463 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.635506 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.635523 3562 projected.go:200] Error preparing data for 
projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.635679 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.635649659 +0000 UTC m=+23.118880192 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.635702 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.635726 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.635741 3562 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.635799 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.635780102 +0000 UTC m=+23.119010625 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.635837 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.635873 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.635927 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.636008 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.635972065 +0000 UTC m=+23.119202588 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636100 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636178 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636203 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636398 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: 
\"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636447 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636560 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636611 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636707 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636797 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636847 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636937 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.636987 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: 
\"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.637017 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637036 3562 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637054 3562 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637064 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.637154 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.637199 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.637223 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637301 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637317 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637327 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object 
"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637360 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.637350152 +0000 UTC m=+23.120580865 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.637391 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637400 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637419 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637427 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637486 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.637447594 +0000 UTC m=+23.120678107 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637500 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637519 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637526 3562 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.637551 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637609 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637670 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637694 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637701 3562 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637714 3562 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637718 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637723 3562 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, 
object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637730 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637739 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637631 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637622 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.637610508 +0000 UTC m=+23.120841031 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637851 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.637795531 +0000 UTC m=+23.121026194 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637884 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637890 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637898 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637924 3562 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637916 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637959 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638023 3562 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638053 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638075 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638081 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638051 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638093 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638103 3562 
projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638085 3562 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638123 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637199 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638154 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638179 3562 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638187 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638214 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638227 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638235 3562 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638217 3562 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637888 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs 
podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.637875833 +0000 UTC m=+23.121106556 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638306 3562 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638307 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638321 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638311781 +0000 UTC m=+23.121542294 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638326 3562 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638331 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638340 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638333972 +0000 UTC m=+23.121564495 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638344 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638355 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638348602 +0000 UTC m=+23.121579335 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638341 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638370 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638361252 +0000 UTC m=+23.121591775 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638383 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638378152 +0000 UTC m=+23.121608675 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.637775 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638398 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638390513 +0000 UTC m=+23.121621036 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638399 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638414 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638406093 +0000 UTC m=+23.121636616 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638420 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638426 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638421263 +0000 UTC m=+23.121651786 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638433 3562 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638444 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638435955 +0000 UTC m=+23.121666478 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638459 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638453335 +0000 UTC m=+23.121683858 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638401 3562 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638475 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638468265 +0000 UTC m=+23.121698788 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638489 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638483246 +0000 UTC m=+23.121713769 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638504 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638496856 +0000 UTC m=+23.121727379 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.638519 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.638512506 +0000 UTC m=+23.121743029 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.738817 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.738944 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.739115 3562 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.739167 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.739250 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.739223466 +0000 UTC m=+23.222454189 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.739308 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.739366 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.739407 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.739421 3562 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.739498 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.739460671 +0000 UTC m=+23.222691374 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.739513 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.739530 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.739540 3562 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.739576 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.739562813 +0000 UTC m=+23.222793546 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.842453 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.843074 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.842721 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.843384 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.843425 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.843186 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.843475 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.843491 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.843996 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.843953536 +0000 UTC m=+23.327184099 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.844461 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.844427596 +0000 UTC m=+23.327658159 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.874622 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.874939 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.875123 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.875181 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.875129 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.875274 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.875384 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.875434 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.875497 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.875546 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.875595 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.875659 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.875704 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.875745 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.875810 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.875866 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.875933 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.876007 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.876070 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.876130 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.876188 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.876236 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.876284 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.876335 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.876381 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.876427 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.876478 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.876526 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.876575 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.876621 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.876671 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.876723 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.876769 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.876819 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.876870 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.876931 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.876979 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.877025 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.877069 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.877109 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.877188 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.877234 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.877281 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.877327 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.877382 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.877438 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.877480 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.877529 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.877577 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.877616 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.877660 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.877701 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.877745 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.877784 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.877840 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.877876 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.877937 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.877978 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.878017 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.878052 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.878124 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.878162 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.878203 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.878240 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.878276 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.878329 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.875317 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.878433 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.875342 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.878493 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.875361 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.878552 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.949840 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.950081 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.950231 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:24 crc kubenswrapper[3562]: I1213 00:06:24.950402 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.951046 3562 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.951072 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.951170 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.951183 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.951191 3562 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.951244 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" 
not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.951254 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.951260 3562 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.951381 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.951392 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.951399 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.953103 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.953080964 +0000 UTC m=+23.436311497 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.953203 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.953190846 +0000 UTC m=+23.436421379 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.953287 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.953276988 +0000 UTC m=+23.436507511 (durationBeforeRetry 2s). 
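The NetworkReady=false entries above all quote the same runtime condition: no CNI configuration file in /etc/kubernetes/cni/net.d/, so sandbox creation is skipped for every listed pod until the network plugin writes its config there. A small Go sketch that inspects the directory the kubelet names (the path is taken from the log message; the file-extension filter is an assumption about common CNI config file names):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Directory quoted in the kubelet error message. The sandbox errors
    	// clear once a CNI config file appears here and the plugin is ready.
    	dir := "/etc/kubernetes/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("cannot read CNI conf dir:", err)
    		return
    	}
    	found := false
    	for _, e := range entries {
    		// .conf/.conflist/.json are typical CNI config names (assumption).
    		ext := filepath.Ext(e.Name())
    		if ext == ".conf" || ext == ".conflist" || ext == ".json" {
    			fmt.Println("CNI config present:", e.Name())
    			found = true
    		}
    	}
    	if !found {
    		fmt.Println("no CNI configuration yet; NetworkReady will stay false")
    	}
    }
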
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:24 crc kubenswrapper[3562]: E1213 00:06:24.953376 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:26.953365369 +0000 UTC m=+23.436595892 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.015138 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"db2ec7464e483a1398c9209e788bb7e150cc6d2018514e6cb6e805bb0829d37c"} Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.017630 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"3531cf3b4d4b8d54841591bc198da779d8318702f505d43a90a6f4cba2d5f433"} Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.019998 3562 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="36911c198b2ce0c8cde039a0ebc8112b7f81dba598c43d4394f5a9c19730a92e" exitCode=0 Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.020066 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"36911c198b2ce0c8cde039a0ebc8112b7f81dba598c43d4394f5a9c19730a92e"} Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.059578 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.059830 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.060049 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: 
\"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.061555 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.061655 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.061752 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.061881 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:27.061858585 +0000 UTC m=+23.545089108 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.059737 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.062068 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.062140 3562 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.062249 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.062269 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.062278 3562 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.062417 3562 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:27.062403415 +0000 UTC m=+23.545633938 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.062509 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:27.062497567 +0000 UTC m=+23.545728090 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.166187 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.166596 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.167554 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.167582 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.167596 3562 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.167647 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:27.167626465 +0000 UTC m=+23.650857108 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.167695 3562 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.167729 3562 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.167745 3562 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.167818 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:27.167790368 +0000 UTC m=+23.651021071 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.476612 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:25 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:25 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:25 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.476953 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.876346 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.876410 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.876483 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.876498 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.876587 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.876641 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.876650 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.876725 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.876734 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.876777 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.876834 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.876889 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.876968 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.877010 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.876977 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.877084 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.877147 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.877179 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.877291 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.877386 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:25 crc kubenswrapper[3562]: I1213 00:06:25.877394 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:25 crc kubenswrapper[3562]: E1213 00:06:25.877642 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.024572 3562 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="c8cfb5616e94af8be50c9eb3f24db713da19d0bab90bdfe2dbbf994821dc203e" exitCode=0 Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.024617 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"c8cfb5616e94af8be50c9eb3f24db713da19d0bab90bdfe2dbbf994821dc203e"} Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.027802 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad"} Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.027846 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f"} Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.027863 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af"} Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.027879 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b"} Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.027893 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19"} Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.458694 3562 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.461147 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.461201 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.461218 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.461565 3562 kubelet_node_status.go:77] "Attempting to register node" node="crc" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.475581 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:26 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:26 crc kubenswrapper[3562]: 
[+]process-running ok Dec 13 00:06:26 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.475649 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.527295 3562 kubelet_node_status.go:116] "Node was previously registered" node="crc" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.527569 3562 kubelet_node_status.go:80] "Successfully registered node" node="crc" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.529936 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.530136 3562 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T00:06:26Z","lastTransitionTime":"2025-12-13T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.608449 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.608534 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.608603 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.608639 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.608674 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.608728 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.608775 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.609018 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.609123 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.609189 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.609218 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.609253 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.609285 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.609314 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.609343 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.609522 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.609583 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.609731 3562 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.609859 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.609831141 +0000 UTC m=+27.093061664 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.609979 3562 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610042 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.610026844 +0000 UTC m=+27.093257367 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610099 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610131 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.610124476 +0000 UTC m=+27.093354999 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610185 3562 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610213 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.610206628 +0000 UTC m=+27.093437151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610262 3562 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610291 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.610284339 +0000 UTC m=+27.093514862 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610338 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610372 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.610365471 +0000 UTC m=+27.093595994 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610439 3562 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610466 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.610459773 +0000 UTC m=+27.093690296 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610524 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.610553 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.610546195 +0000 UTC m=+27.093776718 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.610564 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.610661 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.610708 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.610741 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.610769 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.610796 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.610836 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.610883 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: 
\"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.610948 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.611141 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.611220 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.611306 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.612484 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.611409 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.612609 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.611445 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.611449 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.611520 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.611495694 +0000 UTC m=+27.094726217 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.612966 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.612949 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613048 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613111 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613094985 +0000 UTC m=+27.096325498 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613143 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613135736 +0000 UTC m=+27.096366259 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613204 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613152136 +0000 UTC m=+27.096382659 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613230 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613217168 +0000 UTC m=+27.096447691 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613263 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613251548 +0000 UTC m=+27.096482071 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613279 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613270029 +0000 UTC m=+27.096500552 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613285 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.611702 3562 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.611721 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.611714 3562 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.611804 3562 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.611938 3562 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.612120 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.612229 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.612325 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.612347 3562 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613412 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613580 3562 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613449 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613410642 +0000 UTC m=+27.096641195 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613619 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613610346 +0000 UTC m=+27.096840869 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613664 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613624897 +0000 UTC m=+27.096855420 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613703 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613692148 +0000 UTC m=+27.096922671 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613714 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613708778 +0000 UTC m=+27.096939291 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613726 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613720939 +0000 UTC m=+27.096951462 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613767 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613733289 +0000 UTC m=+27.096963802 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613780 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.61377363 +0000 UTC m=+27.097004153 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613839 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.61378564 +0000 UTC m=+27.097016163 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613925 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613877862 +0000 UTC m=+27.097108385 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613943 3562 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.613968 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.613947593 +0000 UTC m=+27.097178116 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.614043 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.614015864 +0000 UTC m=+27.097246427 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.614105 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.614089806 +0000 UTC m=+27.097320329 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.614546 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615523 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615556 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615600 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615653 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615685 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615713 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615736 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615764 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615796 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615822 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615862 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615889 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.615927 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615941 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.615948 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.615964 3562 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.615973 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.616003 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616010 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616000453 +0000 UTC m=+27.099230976 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616051 3562 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616072 3562 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616105 3562 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616112 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616101935 +0000 UTC m=+27.099332458 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616130 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616121865 +0000 UTC m=+27.099352378 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616151 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616141046 +0000 UTC m=+27.099371569 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616161 3562 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616175 3562 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616189 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616181617 +0000 UTC m=+27.099412140 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616194 3562 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616225 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616205538 +0000 UTC m=+27.099436101 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616264 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616249759 +0000 UTC m=+27.099480312 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616270 3562 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616298 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.61629129 +0000 UTC m=+27.099521813 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616275 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616310 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616076 3562 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616331 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.61632275 +0000 UTC m=+27.099553273 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616354 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616345051 +0000 UTC m=+27.099575574 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616330 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616372 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616364621 +0000 UTC m=+27.099595144 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616378 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616389 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616381981 +0000 UTC m=+27.099612504 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616238 3562 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.616421 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.616448 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616461 3562 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.616489 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616503 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616492334 +0000 UTC m=+27.099722857 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616529 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616514234 +0000 UTC m=+27.099744757 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616529 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616546 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616537694 +0000 UTC m=+27.099768217 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616555 3562 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.616578 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616593 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616580615 +0000 UTC m=+27.099811178 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616595 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616622 3562 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.616645 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616650 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616642257 +0000 UTC m=+27.099872780 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616698 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616684317 +0000 UTC m=+27.099914880 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616710 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616727 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616738 3562 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616783 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616773199 +0000 UTC m=+27.100003932 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.616803 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.616793669 +0000 UTC m=+27.100024192 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.616841 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.616926 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.616965 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.616998 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617026 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617058 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617088 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617070545 +0000 UTC m=+27.100301098 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617130 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617135 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617162 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617152966 +0000 UTC m=+27.100383489 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617194 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617218 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617233 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617260 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617246978 +0000 UTC m=+27.100477541 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617283 3562 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617310 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617300659 +0000 UTC m=+27.100531182 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617319 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617345 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617370 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617362041 +0000 UTC m=+27.100592564 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617376 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617408 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617434 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617427872 +0000 UTC m=+27.100658395 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617480 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617494 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617523 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617537 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617524304 +0000 UTC m=+27.100754867 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617570 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617579 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617596 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617587655 +0000 UTC m=+27.100818178 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617631 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617653 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617646866 +0000 UTC m=+27.100877389 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617652 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617686 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617709 3562 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617718 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617744 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617750 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617736658 +0000 UTC m=+27.100967211 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617785 3562 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617808 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617801579 +0000 UTC m=+27.101032102 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617810 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617842 3562 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617859 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617864 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.61785734 +0000 UTC m=+27.101087863 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617919 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617951 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.617938782 +0000 UTC m=+27.101169305 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.617964 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617981 3562 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.617996 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618014 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618022 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618014593 +0000 UTC m=+27.101245106 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618048 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618072 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618057504 +0000 UTC m=+27.101288057 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618096 3562 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618120 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618107345 +0000 UTC m=+27.101337908 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618144 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618132606 +0000 UTC m=+27.101363159 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618151 3562 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618175 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618168396 +0000 UTC m=+27.101398909 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618190 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618199 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618229 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618248 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618255 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618244388 +0000 UTC m=+27.101474901 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618293 3562 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618296 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618284599 +0000 UTC m=+27.101515152 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618120 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618343 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618359 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618402 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618391881 +0000 UTC m=+27.101622404 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618415 3562 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618447 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618439112 +0000 UTC m=+27.101669635 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618471 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618531 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618558 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618595 3562 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618604 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618591145 +0000 UTC m=+27.101821698 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618620 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618613085 +0000 UTC m=+27.101843608 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618639 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618627775 +0000 UTC m=+27.101858298 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618565 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618652 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618645836 +0000 UTC m=+27.101876359 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618687 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.618710 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.618703857 +0000 UTC m=+27.101934370 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618716 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618765 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618814 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618865 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.618948 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619032 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619086 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619153 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: 
\"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619201 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619251 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619298 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619373 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619438 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619484 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619532 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619587 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619639 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619688 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619736 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619787 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619833 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.619927 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620024 3562 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.620064 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620068 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" 
failed. No retries permitted until 2025-12-13 00:06:30.620057004 +0000 UTC m=+27.103287527 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620126 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.620163 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620166 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620158296 +0000 UTC m=+27.103388819 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620205 3562 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.620223 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620233 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620226257 +0000 UTC m=+27.103456780 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.620272 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620283 3562 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620310 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620302509 +0000 UTC m=+27.103533032 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.620318 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620346 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620374 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.62036775 +0000 UTC m=+27.103598513 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620412 3562 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620436 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:30.620429201 +0000 UTC m=+27.103659724 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620451 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620480 3562 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620502 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620486983 +0000 UTC m=+27.103717536 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620528 3562 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620534 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620517623 +0000 UTC m=+27.103748186 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620554 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620545594 +0000 UTC m=+27.103776117 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620587 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620591 3562 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620609 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620602465 +0000 UTC m=+27.103832988 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620633 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620621295 +0000 UTC m=+27.103851848 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620649 3562 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620675 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620668696 +0000 UTC m=+27.103899219 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620698 3562 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620714 3562 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620741 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620729767 +0000 UTC m=+27.103960330 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620755 3562 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620767 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620754928 +0000 UTC m=+27.103985481 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620786 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.620779078 +0000 UTC m=+27.104009601 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620829 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620847 3562 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620870 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.62085864 +0000 UTC m=+27.104089203 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.620899 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.62088516 +0000 UTC m=+27.104115713 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621082 3562 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621131 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.621117695 +0000 UTC m=+27.104348258 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621189 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621223 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.621215327 +0000 UTC m=+27.104445850 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621244 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.621236487 +0000 UTC m=+27.104467010 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621264 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621296 3562 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621305 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.621292438 +0000 UTC m=+27.104523001 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621324 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:30.621317739 +0000 UTC m=+27.104548262 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621364 3562 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621366 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621395 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.6213888 +0000 UTC m=+27.104619323 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621411 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.62140297 +0000 UTC m=+27.104633493 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621437 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621467 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621479 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.621467152 +0000 UTC m=+27.104697705 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621512 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.621505872 +0000 UTC m=+27.104736395 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621567 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621579 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621590 3562 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.621614 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.621608144 +0000 UTC m=+27.104838667 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.721592 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.721738 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.721806 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.721839 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.721868 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.721890 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.721915 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.721959 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.721968 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:30.721948808 +0000 UTC m=+27.205179331 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.721869 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722011 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722033 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.721976 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722047 3562 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722050 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722060 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722072 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.72206499 +0000 UTC m=+27.205295503 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722037 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722085 3562 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.722170 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722184 3562 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722221 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.722198352 +0000 UTC m=+27.205428865 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722244 3562 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722252 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.722244693 +0000 UTC m=+27.205475216 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722277 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722336 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722354 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722369 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722409 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.722372016 +0000 UTC m=+27.205602739 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.722567 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.722557149 +0000 UTC m=+27.205787672 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.722957 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.723021 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.723128 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723206 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.723226 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723226 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723229 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.723269 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723271 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc 
kubenswrapper[3562]: E1213 00:06:26.723295 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723322 3562 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723340 3562 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.723344 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723351 3562 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723241 3562 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723358 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723386 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723394 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723415 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.723394217 +0000 UTC m=+27.206624740 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723439 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723462 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723465 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723476 3562 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723444 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.723427107 +0000 UTC m=+27.206657870 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.723506 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723512 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.723494749 +0000 UTC m=+27.206725522 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723482 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723532 3562 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723572 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723587 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723595 3562 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723595 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.7235714 +0000 UTC m=+27.206802173 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723632 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.723619571 +0000 UTC m=+27.206850104 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723665 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.723652652 +0000 UTC m=+27.206883185 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.723787 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723794 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.723783434 +0000 UTC m=+27.207013957 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723890 3562 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723928 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.723948 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.723979 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.723964748 +0000 UTC m=+27.207195491 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724023 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724042 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724052 3562 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.724191 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724288 3562 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724304 3562 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724313 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724324 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.724309185 +0000 UTC m=+27.207539908 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.724313 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724346 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.724338215 +0000 UTC m=+27.207569008 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.724373 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724410 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724437 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724453 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724462 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724479 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724490 3562 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object 
"hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724508 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.724492408 +0000 UTC m=+27.207722961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.724417 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724527 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724539 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.724525009 +0000 UTC m=+27.207755572 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724548 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724563 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.724606 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.72459487 +0000 UTC m=+27.207825613 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.724866 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.724975 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.725030 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.725081 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.725099 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.725105 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.725106 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.725118 3562 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.725119 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.725129 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 
00:06:26.725136 3562 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.725141 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.725183 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.725169931 +0000 UTC m=+27.208400664 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.725203 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.725193152 +0000 UTC m=+27.208423925 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.725220 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.725212562 +0000 UTC m=+27.208443085 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.826528 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.826618 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.826789 3562 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.826841 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.826863 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.826865 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.826917 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.826935 3562 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.826983 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.826996 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc 
kubenswrapper[3562]: E1213 00:06:26.826997 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.826974204 +0000 UTC m=+27.310204727 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.827008 3562 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.827051 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.827033185 +0000 UTC m=+27.310263708 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.827681 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.827672178 +0000 UTC m=+27.310902701 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.874675 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.874732 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.874775 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.874792 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.874880 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.874893 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.874977 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.875049 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875054 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875085 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875318 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.875330 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875352 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875373 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875389 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875451 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875469 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875492 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875501 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875561 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.875717 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875850 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875874 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.875924 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875961 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875927 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.875987 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.876066 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.876072 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.876122 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.876158 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.876176 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.876121 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.876291 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.876389 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.876392 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.876499 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.876577 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.876617 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.876703 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.876768 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.876866 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.876945 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.877071 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.877094 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.877241 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.877296 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.877384 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.877558 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.877604 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.877598 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.877800 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.877659 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.877733 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.877858 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.878011 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.878174 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.878238 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.878391 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.878552 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.878591 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.878645 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.878745 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.878809 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.878832 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.878955 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.879069 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.879234 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.879307 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.879364 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.879507 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.879681 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.929536 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:26 crc kubenswrapper[3562]: I1213 00:06:26.929788 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.929835 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.929891 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.929942 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.930021 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.929996631 +0000 UTC m=+27.413227154 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.930088 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.930110 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.930122 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:26 crc kubenswrapper[3562]: E1213 00:06:26.930170 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:30.930152474 +0000 UTC m=+27.413383107 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.033375 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.033462 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.033526 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.033690 3562 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034335 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034386 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034405 3562 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034415 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034473 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034499 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034518 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034521 3562 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034535 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034634 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:31.034598918 +0000 UTC m=+27.517829441 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034329 3562 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034681 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034784 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:31.034755361 +0000 UTC m=+27.517985874 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034929 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:31.034897244 +0000 UTC m=+27.518127757 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.034950 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:31.034941755 +0000 UTC m=+27.518172278 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.040491 3562 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="978e2a5c2e58848dae6b2d928dbb361a89b5e177c0e3e13e67fca196648b3d76" exitCode=0 Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.040577 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"978e2a5c2e58848dae6b2d928dbb361a89b5e177c0e3e13e67fca196648b3d76"} Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.044130 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc"} Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.136837 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.136920 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.137023 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137207 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137226 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137238 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137299 3562 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:31.137281628 +0000 UTC m=+27.620512151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137364 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137393 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137408 3562 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137487 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:31.137474462 +0000 UTC m=+27.620705185 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137540 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137553 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137563 3562 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.137600 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:31.137589544 +0000 UTC m=+27.620820267 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.242044 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.242133 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.242523 3562 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.242557 3562 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.242573 3562 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.242650 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:31.24262767 +0000 UTC m=+27.725858193 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.242730 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.242747 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.242758 3562 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.242792 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:31.242780443 +0000 UTC m=+27.726010976 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.471548 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:27 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:27 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:27 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.471666 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.874136 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.874331 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.874387 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.874447 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.874480 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.874544 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.874582 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.874637 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.874674 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.874728 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.874772 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.874837 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.874873 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.874954 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.874988 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.874992 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.875056 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.875075 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.875117 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:27 crc kubenswrapper[3562]: I1213 00:06:27.875142 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.875249 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:27 crc kubenswrapper[3562]: E1213 00:06:27.875301 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.057822 3562 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="45dbeb3e83fef673bd3011933c621a4eb2970239987e36e224baa1256d58a827" exitCode=0 Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.057875 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"45dbeb3e83fef673bd3011933c621a4eb2970239987e36e224baa1256d58a827"} Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.470852 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:28 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:28 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:28 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.470942 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.862028 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.874747 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.874784 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.874859 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.874933 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.874934 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.874979 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.874989 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875073 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.875078 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875122 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875166 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875198 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.875231 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.875263 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875291 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875309 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875367 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.875397 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875423 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875488 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.875490 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875547 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875567 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.875616 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875619 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875652 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875697 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.875703 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875743 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875765 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.875810 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875743 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.875848 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.875924 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875940 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.875957 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876020 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876076 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.876101 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.876140 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876217 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876284 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.876309 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.876349 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876399 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.876426 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876492 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876552 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.876595 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876648 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.876670 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876729 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876778 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.876802 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.876842 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876889 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.876928 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.876993 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877043 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877090 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877138 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877218 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877266 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877321 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877364 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877429 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877477 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877536 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877574 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877624 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:28 crc kubenswrapper[3562]: I1213 00:06:28.877672 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:28 crc kubenswrapper[3562]: E1213 00:06:28.877745 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.062464 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769"} Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.064449 3562 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="270b24892415af6b0c008da3a565de900e772dfe2c9e8a87cd5d92f91e0d966b" exitCode=0 Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.064472 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"270b24892415af6b0c008da3a565de900e772dfe2c9e8a87cd5d92f91e0d966b"} Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.471553 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:29 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:29 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:29 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.471636 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.878296 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:29 crc kubenswrapper[3562]: E1213 00:06:29.878505 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.878584 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:29 crc kubenswrapper[3562]: E1213 00:06:29.878693 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.878733 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:29 crc kubenswrapper[3562]: E1213 00:06:29.878813 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.878852 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:29 crc kubenswrapper[3562]: E1213 00:06:29.878956 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.879012 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:29 crc kubenswrapper[3562]: E1213 00:06:29.879116 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.879176 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:29 crc kubenswrapper[3562]: E1213 00:06:29.879280 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.879333 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:29 crc kubenswrapper[3562]: E1213 00:06:29.879440 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.879493 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:29 crc kubenswrapper[3562]: E1213 00:06:29.879595 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.879655 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:29 crc kubenswrapper[3562]: E1213 00:06:29.879774 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.879819 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:29 crc kubenswrapper[3562]: E1213 00:06:29.879926 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:29 crc kubenswrapper[3562]: I1213 00:06:29.879967 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:29 crc kubenswrapper[3562]: E1213 00:06:29.880042 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.070289 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"2c618e89033ca2507d1bd36eaf9db1b475360068074e7abfb37834b43994f980"} Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.472027 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:30 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:30 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:30 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.472107 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.637637 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.637691 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.637720 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.637743 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.637766 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.637787 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.637810 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.637830 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.637861 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.637883 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.637899 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638000 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638018 3562 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638026 3562 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638073 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638056401 +0000 UTC m=+35.121286924 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638129 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638138 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638101162 +0000 UTC m=+35.121331725 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638171 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638185 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638161913 +0000 UTC m=+35.121392556 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638210 3562 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638219 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638202964 +0000 UTC m=+35.121433607 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.637944 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638273 3562 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638239 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638230545 +0000 UTC m=+35.121461198 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638312 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638333 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638325866 +0000 UTC m=+35.121556389 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638357 3562 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.638354 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638385 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:38.638363917 +0000 UTC m=+35.121594600 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638411 3562 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638420 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638403578 +0000 UTC m=+35.121634201 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638445 3562 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638463 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638457749 +0000 UTC m=+35.121688272 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.638476 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.638634 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638655 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638647613 +0000 UTC m=+35.121878286 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638490 3562 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638724 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638718164 +0000 UTC m=+35.121948687 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638526 3562 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.638734 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638562 3562 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638794 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638788215 +0000 UTC m=+35.122018738 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638823 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638802986 +0000 UTC m=+35.122033629 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638856 3562 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638878 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638872687 +0000 UTC m=+35.122103200 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638829 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.638925 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.638898848 +0000 UTC m=+35.122129491 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.638883 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.639010 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.639082 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.639061922 +0000 UTC m=+35.122292555 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.639150 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.639271 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.639387 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.639447 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.639510 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.639661 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.639725 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.639784 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod 
\"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.639851 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.639981 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.640055 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.640190 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.640259 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.640356 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.640427 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.640497 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.640674 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.640770 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.640846 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.640946 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.641035 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.641138 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.641242 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.641319 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.641474 3562 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.641558 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.641710 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.641786 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.641853 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.641956 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.642056 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.642123 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.642192 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod 
\"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.642261 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.642326 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.642397 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.642466 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.642560 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.642629 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.642825 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.642932 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: 
I1213 00:06:30.643013 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.643085 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.643167 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.643237 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.643311 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.643382 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.643458 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.643531 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.643633 3562 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.643803 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.643887 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.643998 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.644077 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.644152 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.644223 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.644330 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.644404 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod 
\"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.644476 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.644575 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645279 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645323 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645358 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645387 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645422 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645446 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645472 3562 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645506 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645538 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645562 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645609 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645631 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645655 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645678 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645709 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: 
\"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645748 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645772 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645795 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645818 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645849 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645870 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645893 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.645999 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.646052 3562 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.646078 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.646102 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646114 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646236 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646240 3562 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646240 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646212753 +0000 UTC m=+35.129443326 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646292 3562 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646314 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646295105 +0000 UTC m=+35.129525628 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646320 3562 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646334 3562 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646348 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646363 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646335806 +0000 UTC m=+35.129566339 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646390 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646390 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646374686 +0000 UTC m=+35.129605209 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646411 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646434 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646417 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646409617 +0000 UTC m=+35.129640140 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646442 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646471 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646488 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646467508 +0000 UTC m=+35.129698151 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646501 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646463 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646528 3562 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646538 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646465 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646539 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646568 3562 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646512 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646515 3562 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646501499 +0000 UTC m=+35.129732172 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646596 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646610 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646599141 +0000 UTC m=+35.129829784 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646616 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646626 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646618161 +0000 UTC m=+35.129848684 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646633 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646660 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646640 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646633871 +0000 UTC m=+35.129864394 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646698 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646683822 +0000 UTC m=+35.129914485 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646618 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646753 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646773 3562 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646782 3562 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646717 3562 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646808 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646415 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646724 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646710053 +0000 UTC m=+35.129940586 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646733 3562 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646577 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646862 3562 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646849 3562 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646883 3562 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646700 3562 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646845 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646834955 +0000 UTC m=+35.130065488 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646946 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646937987 +0000 UTC m=+35.130168510 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646958 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646973 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646985 3562 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646273 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647017 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647028 3562 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647047 3562 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647054 3562 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647073 3562 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647081 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647089 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647108 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647121 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 13 00:06:30 crc 
kubenswrapper[3562]: E1213 00:06:30.647144 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647174 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647187 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647211 3562 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647232 3562 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647260 3562 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647267 3562 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647277 3562 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646961 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.646952998 +0000 UTC m=+35.130183521 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647301 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647321 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647212 3562 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647340 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647343 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647364 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647270 3562 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647389 3562 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647276 3562 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647404 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647305 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.647292424 +0000 UTC m=+35.130523077 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647439 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.647430268 +0000 UTC m=+35.130660941 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647447 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647460 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.647449278 +0000 UTC m=+35.130679951 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647019 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647476 3562 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647479 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647096 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647509 3562 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647482 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.647469009 +0000 UTC m=+35.130699662 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647533 3562 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647545 3562 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647412 3562 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647580 3562 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647441 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647615 3562 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647618 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646989 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647548 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.64753215 +0000 UTC m=+35.130762773 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647679 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.647669413 +0000 UTC m=+35.130900066 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647694 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.647685663 +0000 UTC m=+35.130916336 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647697 3562 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647710 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.647702713 +0000 UTC m=+35.130933366 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646575 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647708 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647339 3562 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646674 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646977 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647212 3562 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.646891 3562 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: 
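[Editor's note] The repeated "Couldn't get secret/configMap ...: object ... not registered" entries above share one failure mode: the kubelet hands secrets and configmaps to volume plugins through a cache that only answers for objects whose consuming pods have already been registered with it, and during this early startup window those registrations have not happened yet, so every lookup is refused and retried. A minimal stdlib-only sketch of that gatekeeping follows; the type and method names are illustrative, not kubelet's real API.

```go
// Sketch of a kubelet-style cache-based object manager: volume plugins may
// only read secrets/configmaps whose pod references were registered first.
package main

import "fmt"

type objectKey struct{ namespace, name string }

type cacheBasedManager struct {
	registered map[objectKey]int // reference counts from registered pods
	store      map[objectKey][]byte
}

func newManager() *cacheBasedManager {
	return &cacheBasedManager{
		registered: map[objectKey]int{},
		store:      map[objectKey][]byte{},
	}
}

// RegisterPod records every object the pod's volumes refer to, so that
// later GetObject calls are allowed to hit the (watch-backed) store.
func (m *cacheBasedManager) RegisterPod(refs []objectKey) {
	for _, k := range refs {
		m.registered[k]++
	}
}

// GetObject mirrors the failure mode in the log: an unregistered
// reference is refused outright rather than fetched on demand.
func (m *cacheBasedManager) GetObject(namespace, name string) ([]byte, error) {
	k := objectKey{namespace, name}
	if m.registered[k] == 0 {
		return nil, fmt.Errorf("object %q/%q not registered", namespace, name)
	}
	return m.store[k], nil
}

func main() {
	m := newManager()
	_, err := m.GetObject("openshift-multus", "multus-admission-controller-secret")
	fmt.Println(err) // object "openshift-multus"/"multus-admission-controller-secret" not registered
}
```

On the real node these entries clear on their own once the pod workers register the pods and the watch-based cache warms up, which is why each failure is merely backed off and retried rather than treated as fatal.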
object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647423 3562 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647577 3562 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.647725 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.647717004 +0000 UTC m=+35.130947657 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648240 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648223133 +0000 UTC m=+35.131453666 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648324 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648306875 +0000 UTC m=+35.131537508 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648345 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648335186 +0000 UTC m=+35.131565859 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648366 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648355356 +0000 UTC m=+35.131586019 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648388 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648376546 +0000 UTC m=+35.131607209 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648410 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648398237 +0000 UTC m=+35.131628780 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648428 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648418937 +0000 UTC m=+35.131649470 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648446 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648437818 +0000 UTC m=+35.131668351 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648461 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648452698 +0000 UTC m=+35.131683231 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648475 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648468168 +0000 UTC m=+35.131698701 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648490 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648483269 +0000 UTC m=+35.131713802 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648508 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648499989 +0000 UTC m=+35.131730522 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648523 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:38.648515279 +0000 UTC m=+35.131745812 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648538 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648530589 +0000 UTC m=+35.131761122 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648555 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.64854721 +0000 UTC m=+35.131777743 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648572 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.64856455 +0000 UTC m=+35.131795083 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648587 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.64858011 +0000 UTC m=+35.131810643 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648603 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:38.648594511 +0000 UTC m=+35.131825044 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648617 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648610111 +0000 UTC m=+35.131840644 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648633 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648625371 +0000 UTC m=+35.131855904 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648646 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648639482 +0000 UTC m=+35.131870015 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648661 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648652452 +0000 UTC m=+35.131882985 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648676 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648668622 +0000 UTC m=+35.131899155 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648690 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648683962 +0000 UTC m=+35.131914495 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648707 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648699793 +0000 UTC m=+35.131930326 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648722 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648714453 +0000 UTC m=+35.131944986 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648737 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648729743 +0000 UTC m=+35.131960276 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648753 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648744274 +0000 UTC m=+35.131974807 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648766 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648759774 +0000 UTC m=+35.131990307 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648781 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648773194 +0000 UTC m=+35.132003727 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648794 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648787294 +0000 UTC m=+35.132017827 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648809 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648801735 +0000 UTC m=+35.132032268 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648823 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648815805 +0000 UTC m=+35.132046338 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648838 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648830215 +0000 UTC m=+35.132060748 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648852 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648844806 +0000 UTC m=+35.132075339 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648865 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648858706 +0000 UTC m=+35.132089239 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648879 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648871546 +0000 UTC m=+35.132102079 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648893 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648885336 +0000 UTC m=+35.132115869 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648932 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648921907 +0000 UTC m=+35.132152440 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648947 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648939707 +0000 UTC m=+35.132170240 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648962 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648954718 +0000 UTC m=+35.132185251 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648977 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.648969098 +0000 UTC m=+35.132199631 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.648993 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:38.648984978 +0000 UTC m=+35.132215511 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649007 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649000339 +0000 UTC m=+35.132230872 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649022 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649014839 +0000 UTC m=+35.132245372 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649037 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649028899 +0000 UTC m=+35.132259432 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649054 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649044849 +0000 UTC m=+35.132275392 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649069 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.64906089 +0000 UTC m=+35.132291433 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649084 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.64907614 +0000 UTC m=+35.132306673 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649138 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649129741 +0000 UTC m=+35.132360274 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649156 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649146381 +0000 UTC m=+35.132376914 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649172 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649165252 +0000 UTC m=+35.132395785 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649188 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:38.649177892 +0000 UTC m=+35.132408425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649202 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649195332 +0000 UTC m=+35.132425865 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649218 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649209633 +0000 UTC m=+35.132440166 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649233 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649226083 +0000 UTC m=+35.132456616 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649246 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649239643 +0000 UTC m=+35.132470176 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649260 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649253274 +0000 UTC m=+35.132483807 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649273 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649266924 +0000 UTC m=+35.132497457 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649286 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649280044 +0000 UTC m=+35.132510577 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649300 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.649293434 +0000 UTC m=+35.132523967 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.649313 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:38.649306625 +0000 UTC m=+35.132537158 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.749587 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.749655 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.749801 3562 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.749817 3562 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.749828 3562 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.749840 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.749875 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.749855972 +0000 UTC m=+35.233086495 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.749880 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.749896 3562 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.749955 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.749977 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.749952414 +0000 UTC m=+35.233183017 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750018 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750029 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750037 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750071 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.750063916 +0000 UTC m=+35.233294429 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.750072 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750118 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750128 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750134 3562 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750211 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.750201639 +0000 UTC m=+35.233432162 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.750217 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.750299 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.750404 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750461 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750509 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750523 3562 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750542 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750556 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750567 3562 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" 
not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750602 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.750571526 +0000 UTC m=+35.233802049 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750627 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.750617687 +0000 UTC m=+35.233848300 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.750482 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750639 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750657 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750668 3562 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750699 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.750688038 +0000 UTC m=+35.233918651 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.750749 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750762 3562 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750775 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750804 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.75079536 +0000 UTC m=+35.234026003 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750857 3562 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750871 3562 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750879 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750930 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.750896452 +0000 UTC m=+35.234126975 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.750859 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.750991 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.751038 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.751339 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.751374 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.751405 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.751546 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.751690 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.751762 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.751805 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.751849 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.751878 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.752474 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752582 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752600 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752609 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752640 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.752630827 +0000 UTC m=+35.235861420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.750930 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752662 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752669 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752696 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.752688208 +0000 UTC m=+35.235918811 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752746 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752762 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752770 3562 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752796 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.75278816 +0000 UTC m=+35.236018753 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752846 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752858 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752867 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752895 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.752884512 +0000 UTC m=+35.236115035 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752967 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752981 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.752988 3562 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753017 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.753008974 +0000 UTC m=+35.236239557 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753069 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753081 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753089 3562 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753118 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.753107966 +0000 UTC m=+35.236338559 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753197 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753210 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753218 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753245 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.753236949 +0000 UTC m=+35.236467472 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753297 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753310 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753318 3562 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753345 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.753336021 +0000 UTC m=+35.236566554 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753393 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753405 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753413 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753439 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.753430523 +0000 UTC m=+35.236661136 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753491 3562 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753503 3562 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753511 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753537 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.753528755 +0000 UTC m=+35.236759358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753586 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753598 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753605 3562 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753630 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.753622236 +0000 UTC m=+35.236852819 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753678 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753690 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753698 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753723 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.753715748 +0000 UTC m=+35.236946271 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753772 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753785 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753793 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.753830 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.753822171 +0000 UTC m=+35.237052794 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.854331 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.854406 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.854523 3562 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.854546 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.854594 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.854575152 +0000 UTC m=+35.337805675 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.854623 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.854702 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.854760 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.854774 3562 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.854724 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.854837 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.854816717 +0000 UTC m=+35.338047240 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.854832 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.854900 3562 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.855015 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.854995881 +0000 UTC m=+35.338226444 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874816 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874856 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874881 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874888 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874971 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874979 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875022 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875040 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875024 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875060 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874817 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875122 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874898 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.875141 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874977 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.875240 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875278 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875306 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874821 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874841 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.874940 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875060 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875392 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.875479 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875513 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875532 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875545 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.875397 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875100 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.875773 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875844 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875895 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.875974 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.876064 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.876086 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.876229 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.876248 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.876299 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.876363 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.876380 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.875759 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.876451 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.876508 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.876694 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.876751 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.876765 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.876855 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.876945 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.877025 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.877085 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.877166 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.877256 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.877331 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.877418 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.877518 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.877592 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.877662 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.877706 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.877794 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.877888 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878033 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878126 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878210 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878282 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878367 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878409 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878469 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878539 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878611 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878684 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878770 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.878843 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.956488 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 13 00:06:30 crc kubenswrapper[3562]: I1213 00:06:30.956575 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.956661 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.956680 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.956691 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.956734 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.956716471 +0000 UTC m=+35.439946994 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.956763 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.956776 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.956784 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:30 crc kubenswrapper[3562]: E1213 00:06:30.956810 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:38.956801263 +0000 UTC m=+35.440031786 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.058146 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.058216 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.058276 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.058345 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060255 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060297 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060314 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060385 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:39.060361139 +0000 UTC m=+35.543591682 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060465 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060494 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060508 3562 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060549 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:39.060534883 +0000 UTC m=+35.543765426 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060608 3562 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060619 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060646 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:39.060637635 +0000 UTC m=+35.543868178 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060703 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060719 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060729 3562 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.060760 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:39.060751307 +0000 UTC m=+35.543981840 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.077563 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff"}
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.080311 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.080368 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.134516 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.137697 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.162157 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.162244 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.162374 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.162591 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.162685 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.162766 3562 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.162837 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.162934 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.162959 3562 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.162865 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.163038 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.163049 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.162889 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:39.162866966 +0000 UTC m=+35.646097489 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.163285 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:39.163265474 +0000 UTC m=+35.646496177 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.163320 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:39.163307794 +0000 UTC m=+35.646538587 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.265776 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.266472 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.265980 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.266561 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.266573 3562 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.266618 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:39.266601616 +0000 UTC m=+35.749832139 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.266712 3562 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.266737 3562 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.266750 3562 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.266795 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:39.26678299 +0000 UTC m=+35.750013513 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.471805 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:31 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:31 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:31 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.472124 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.874769 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.874876 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.874957 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.875026 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.875040 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.875049 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.875060 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.875125 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.875059 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.875079 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:31 crc kubenswrapper[3562]: I1213 00:06:31.874968 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.875345 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.875457 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.875616 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.875767 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.875886 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.876023 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.876097 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.876236 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.876396 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.876481 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:31 crc kubenswrapper[3562]: E1213 00:06:31.876633 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.080614 3562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.472357 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:32 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:32 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:32 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.472492 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.874225 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.874461 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.874547 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.874736 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.874842 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.874985 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.875063 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.875490 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.875582 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.875703 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.875757 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.875837 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.875889 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.876007 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.876062 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.876149 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.876196 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.876272 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.876324 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.876424 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.876473 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.876558 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.876614 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.876725 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.876775 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.876897 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.876967 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.877042 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.877087 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.877160 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.877209 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.877302 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.877348 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.877430 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.877476 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.877550 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.877593 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.877669 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.877724 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.877833 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.877937 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.878027 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.878075 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.878157 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.878203 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.878310 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.878368 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.878450 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.878498 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.878596 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.878644 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.878717 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.878762 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.878882 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.878996 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.879126 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.879183 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.879260 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.879309 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.879384 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.879429 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.879506 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.879550 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.879624 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.879670 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.879746 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.879790 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.879863 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.879947 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.880080 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:32 crc kubenswrapper[3562]: I1213 00:06:32.880158 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:32 crc kubenswrapper[3562]: E1213 00:06:32.880282 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.083692 3562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.469217 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.473357 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:33 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:33 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:33 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.473429 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.863017 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.874213 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.874262 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.874302 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.874271 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.874337 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.874379 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.874306 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.874347 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.874378 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.876356 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.876449 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.876545 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.876568 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:33 crc kubenswrapper[3562]: I1213 00:06:33.876617 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.876792 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.876846 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.876946 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.877017 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.877069 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.877146 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.877220 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:33 crc kubenswrapper[3562]: E1213 00:06:33.877262 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.471826 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:34 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:34 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:34 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.471932 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874277 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874320 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874286 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.874467 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874469 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874547 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874481 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.874579 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874584 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874630 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874643 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874630 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874657 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874553 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874686 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874659 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874599 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.874810 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874504 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874614 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874878 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874632 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.874967 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874676 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874690 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.875018 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874702 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.874883 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.875076 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.875146 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.875208 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.875229 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.875265 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.875379 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.875440 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.875469 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.875516 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.875571 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.875620 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.875681 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.875724 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.875727 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.875744 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.875702 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.875813 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.875928 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.875998 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876031 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.876102 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.876118 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876143 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876196 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876259 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876365 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876422 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876487 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876548 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876613 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876683 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876744 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876800 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:34 crc kubenswrapper[3562]: I1213 00:06:34.876844 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876897 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.876996 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.877031 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.877093 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.877143 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.877194 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.877247 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.877295 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.877336 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:34 crc kubenswrapper[3562]: E1213 00:06:34.877409 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.484805 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:35 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:35 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:35 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.484866 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.874933 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.875021 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.875043 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:35 crc kubenswrapper[3562]: E1213 00:06:35.875187 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.875227 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.875259 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.875323 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:35 crc kubenswrapper[3562]: E1213 00:06:35.875348 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:35 crc kubenswrapper[3562]: E1213 00:06:35.875481 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.875548 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.875672 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.875674 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:35 crc kubenswrapper[3562]: E1213 00:06:35.875789 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:35 crc kubenswrapper[3562]: E1213 00:06:35.875893 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.875934 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:35 crc kubenswrapper[3562]: I1213 00:06:35.875946 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:35 crc kubenswrapper[3562]: E1213 00:06:35.875996 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:35 crc kubenswrapper[3562]: E1213 00:06:35.876083 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:35 crc kubenswrapper[3562]: E1213 00:06:35.876437 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:35 crc kubenswrapper[3562]: E1213 00:06:35.876620 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:35 crc kubenswrapper[3562]: E1213 00:06:35.876681 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:35 crc kubenswrapper[3562]: E1213 00:06:35.876756 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.093663 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"af9ce9081d78f5864fe98eedde0bf5920e1f563e438c64dd98e7ecb343965869"} Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.471729 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:36 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:36 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:36 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.471955 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874456 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874489 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874460 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874554 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874561 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874636 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.874665 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874680 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874668 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874728 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874733 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874767 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874651 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874522 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874835 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.874858 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874880 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.874696 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.875044 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.875015 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.875090 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.875096 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.875120 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.875161 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.875208 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.875212 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.875307 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.875392 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.875119 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.875669 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.875702 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.875765 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.875796 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.875809 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.875849 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.875950 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.876049 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.876165 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.876223 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.876295 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.876360 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.876417 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.876490 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.876530 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.876598 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.876628 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.876745 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.876779 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.876788 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.876848 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.876885 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.876987 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877050 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.877081 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.877101 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877157 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877190 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877204 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.877249 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877293 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877357 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877437 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.877460 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877519 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877571 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:36 crc kubenswrapper[3562]: I1213 00:06:36.877441 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877695 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877742 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877645 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877918 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.877955 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:36 crc kubenswrapper[3562]: E1213 00:06:36.878008 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.472829 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:37 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:37 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:37 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.472935 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.874926 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.874978 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.875026 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.875026 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.875070 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:37 crc kubenswrapper[3562]: E1213 00:06:37.875138 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.875079 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.875215 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.875254 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:37 crc kubenswrapper[3562]: E1213 00:06:37.875226 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.875292 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:37 crc kubenswrapper[3562]: E1213 00:06:37.875352 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:37 crc kubenswrapper[3562]: E1213 00:06:37.875456 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:37 crc kubenswrapper[3562]: E1213 00:06:37.875526 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:37 crc kubenswrapper[3562]: E1213 00:06:37.875625 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:37 crc kubenswrapper[3562]: E1213 00:06:37.875695 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.875737 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:37 crc kubenswrapper[3562]: E1213 00:06:37.875783 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:37 crc kubenswrapper[3562]: I1213 00:06:37.875739 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:37 crc kubenswrapper[3562]: E1213 00:06:37.875875 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:37 crc kubenswrapper[3562]: E1213 00:06:37.875984 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:37 crc kubenswrapper[3562]: E1213 00:06:37.876080 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.471521 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:38 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:38 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:38 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.471669 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.669096 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.669224 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.669258 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.669345 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.669385 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.669432 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.669469 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669461 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669561 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669615 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.669579564 +0000 UTC m=+51.152810157 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669624 3562 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669619 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669673 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.669646595 +0000 UTC m=+51.152877348 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669706 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.669689697 +0000 UTC m=+51.152920460 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669711 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669732 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669767 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.669736168 +0000 UTC m=+51.152966861 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.669505 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669576 3562 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669635 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669800 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.669786679 +0000 UTC m=+51.153017412 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.669920 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.669879692 +0000 UTC m=+51.153110415 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.669977 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670025 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.669966834 +0000 UTC m=+51.153197517 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670057 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.670044257 +0000 UTC m=+51.153275010 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670057 3562 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670127 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670149 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670220 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670261 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.670216291 +0000 UTC m=+51.153446994 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670307 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670346 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670349 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:54.670339595 +0000 UTC m=+51.153570328 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670395 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.670384386 +0000 UTC m=+51.153615129 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670412 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670441 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670466 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.670457068 +0000 UTC m=+51.153687801 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670500 3562 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670547 3562 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670587 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.670574432 +0000 UTC m=+51.153805155 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670501 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670607 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.670597613 +0000 UTC m=+51.153828346 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670634 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670678 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670717 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670756 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670769 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670802 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670818 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.670808059 +0000 UTC m=+51.154038762 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670853 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670867 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670884 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670927 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.670893201 +0000 UTC m=+51.154123914 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670960 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.670968 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.670998 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671002 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.670994184 +0000 UTC m=+51.154224957 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671063 3562 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671083 3562 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671097 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671122 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671107 3562 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671199 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671092 3562 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.671083327 +0000 UTC m=+51.154314080 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.671321 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671374 3562 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671409 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.671395525 +0000 UTC m=+51.154626248 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671434 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.671424636 +0000 UTC m=+51.154655389 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671452 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.671442907 +0000 UTC m=+51.154673620 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671469 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.671461257 +0000 UTC m=+51.154691980 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671490 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.671481158 +0000 UTC m=+51.154711691 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671508 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.671498758 +0000 UTC m=+51.154729501 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.671535 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.671571 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.671607 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.671640 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.671689 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.671727 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671736 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671749 3562 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671804 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 13 00:06:38 crc 
kubenswrapper[3562]: E1213 00:06:38.671751 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671845 3562 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.671760 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671892 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.671803497 +0000 UTC m=+51.155034020 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671973 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.671961141 +0000 UTC m=+51.155191884 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671821 3562 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.671822 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672093 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.671986912 +0000 UTC m=+51.155217435 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672127 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.672116885 +0000 UTC m=+51.155347628 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672150 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.672143176 +0000 UTC m=+51.155373919 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.672224 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672272 3562 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672304 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.6722889 +0000 UTC m=+51.155519623 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672320 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.672313171 +0000 UTC m=+51.155543894 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672369 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.672330741 +0000 UTC m=+51.155561464 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.672419 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.672488 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672541 3562 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672742 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.672731753 +0000 UTC m=+51.155962446 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672583 3562 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672790 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.672776354 +0000 UTC m=+51.156007087 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.672658 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672860 3562 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.672872 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672922 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.672894507 +0000 UTC m=+51.156125030 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.672967 3562 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673008 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.67299891 +0000 UTC m=+51.156229623 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.673066 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.673100 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673196 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673244 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.673232947 +0000 UTC m=+51.156463640 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.673203 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673315 3562 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673356 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.673346871 +0000 UTC m=+51.156577584 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.673356 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673397 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.673421 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673444 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.673431413 +0000 UTC m=+51.156662086 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.673496 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673514 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673534 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673575 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673580 3562 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673610 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.673600978 +0000 UTC m=+51.156831701 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673622 3562 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673583 3562 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673633 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.673620688 +0000 UTC m=+51.156851361 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673785 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.673771273 +0000 UTC m=+51.157001966 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.673833 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.673794993 +0000 UTC m=+51.157025716 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.673537 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.673998 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674081 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674153 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674226 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674296 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674361 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674401 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674453 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674492 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674537 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674544 3562 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674572 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674583 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.674573595 +0000 UTC m=+51.157804308 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674593 3562 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674630 3562 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674604 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674665 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674669 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674725 3562 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674736 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674673 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.674659017 +0000 UTC m=+51.157889610 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674759 3562 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674780 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.67476911 +0000 UTC m=+51.157999743 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674805 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.674789921 +0000 UTC m=+51.158020664 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674810 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674817 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674831 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.674817732 +0000 UTC m=+51.158048485 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674853 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.674841622 +0000 UTC m=+51.158072395 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674519 3562 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674939 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.674929775 +0000 UTC m=+51.158160518 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674964 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.674955406 +0000 UTC m=+51.158186129 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.674896 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674978 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.674991 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.674980666 +0000 UTC m=+51.158211389 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.675029 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675057 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.675038908 +0000 UTC m=+51.158269631 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675069 3562 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675082 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.675072399 +0000 UTC m=+51.158303132 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675115 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.67509775 +0000 UTC m=+51.158328463 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675139 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.67512967 +0000 UTC m=+51.158360403 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675156 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.675149411 +0000 UTC m=+51.158380154 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.675210 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.675257 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675325 3562 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675371 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.675360507 +0000 UTC m=+51.158591040 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.675328 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675395 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675397 3562 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.675444 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675457 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.675446219 +0000 UTC m=+51.158676922 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675472 3562 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675487 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.67547782 +0000 UTC m=+51.158708573 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.675489 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675515 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.675500431 +0000 UTC m=+51.158731094 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675545 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.675553 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675583 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.675573563 +0000 UTC m=+51.158804266 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675609 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.675640 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675649 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.675638335 +0000 UTC m=+51.158869048 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675691 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.675711 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675725 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.675714647 +0000 UTC m=+51.158945170 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.675761 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675820 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675859 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675862 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.67584776 +0000 UTC m=+51.159078493 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.675820 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675925 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.675890152 +0000 UTC m=+51.159120875 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.675994 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.676009 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676026 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676049 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676112 3562 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.676052 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676133 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676113 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.676093338 +0000 UTC m=+51.159323901 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.676225 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676276 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.676255113 +0000 UTC m=+51.159485846 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676317 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.676304934 +0000 UTC m=+51.159535677 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676318 3562 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.676363 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676370 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.676355676 +0000 UTC m=+51.159586229 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.676410 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.676463 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676466 3562 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676521 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.67651068 +0000 UTC m=+51.159741403 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676567 3562 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676568 3562 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676582 3562 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676615 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676625 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.676609923 +0000 UTC m=+51.159840476 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.676578 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676656 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.676640774 +0000 UTC m=+51.159871327 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676681 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.676668704 +0000 UTC m=+51.159899267 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.676764 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.676831 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676888 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.676952 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.676942612 +0000 UTC m=+51.160173275 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.676989 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.677021 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677026 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677067 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677079 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677100 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.677079336 +0000 UTC m=+51.160309899 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677137 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.677120257 +0000 UTC m=+51.160350820 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.677231 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677261 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.677249821 +0000 UTC m=+51.160480554 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.677322 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.677363 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677378 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677423 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677446 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.677427736 +0000 UTC m=+51.160658459 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.677428 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677468 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677485 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.677466317 +0000 UTC m=+51.160697070 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677511 3562 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677538 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.677528658 +0000 UTC m=+51.160759181 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.677590 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.677639 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677671 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.677650152 +0000 UTC m=+51.160880885 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677692 3562 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677730 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.677718974 +0000 UTC m=+51.160949497 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.677738 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677768 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.677821 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.677893 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677934 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.677918689 +0000 UTC m=+51.161149442 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677925 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677971 3562 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677997 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678034 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678050 3562 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.677983 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.677971621 +0000 UTC m=+51.161202354 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678084 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.678073714 +0000 UTC m=+51.161304427 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.678117 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.678145 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.678170 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.678197 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678230 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.678199927 +0000 UTC m=+51.161430630 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678234 3562 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678277 3562 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678306 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:54.6782982 +0000 UTC m=+51.161528723 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.678309 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678239 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678370 3562 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678317 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678351 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.678323251 +0000 UTC m=+51.161553934 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678498 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.678484075 +0000 UTC m=+51.161714788 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678519 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.678511236 +0000 UTC m=+51.161741759 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.678548 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.678605 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.678647 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678663 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.678635149 +0000 UTC m=+51.161865852 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678673 3562 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678718 3562 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678770 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.678747372 +0000 UTC m=+51.161978035 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.678777 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678814 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.678792634 +0000 UTC m=+51.162023337 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.678879 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.678895 3562 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.679015 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679027 3562 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679024 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.67900172 +0000 UTC m=+51.162232443 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.679102 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679129 3562 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679145 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.679159 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679201 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.679179765 +0000 UTC m=+51.162410498 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679266 3562 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679300 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.679281578 +0000 UTC m=+51.162512271 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679331 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679347 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.67932986 +0000 UTC m=+51.162560603 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679366 3562 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.679271 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679388 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.679366131 +0000 UTC m=+51.162596884 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679419 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.679401822 +0000 UTC m=+51.162632545 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679454 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:54.679435363 +0000 UTC m=+51.162666066 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.679531 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679669 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679690 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679700 3562 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.679750 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.679739231 +0000 UTC m=+51.162969924 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.780961 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.781077 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.781171 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.781237 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.781305 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.781368 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781440 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781510 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc 
kubenswrapper[3562]: E1213 00:06:38.781523 3562 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781551 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781576 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781583 3562 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781605 3562 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781615 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.781573837 +0000 UTC m=+51.264804400 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781589 3562 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781653 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781661 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.781652309 +0000 UTC m=+51.264882912 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781617 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781669 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781682 3562 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781698 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.78169084 +0000 UTC m=+51.264921453 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.781459 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781709 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781723 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781727 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:54.781712521 +0000 UTC m=+51.264943044 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781732 3562 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781472 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781761 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781770 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781669 3562 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781797 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.781776723 +0000 UTC m=+51.265007326 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781807 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.781841 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781845 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.781835024 +0000 UTC m=+51.265065677 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.781873 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.781938 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781979 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.781991 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782000 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782009 3562 
projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782013 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782020 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782026 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782033 3562 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782037 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.781978438 +0000 UTC m=+51.265208971 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782011 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782184 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.782172884 +0000 UTC m=+51.265403407 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782205 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.782196234 +0000 UTC m=+51.265426878 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782219 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.782212555 +0000 UTC m=+51.265443178 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.782291 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.782325 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.782346 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782407 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 
00:06:38.782422 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782430 3562 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782461 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.782451432 +0000 UTC m=+51.265682025 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782470 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782494 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782505 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782503 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782524 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.782531 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782534 3562 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 
00:06:38.782550 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.782533135 +0000 UTC m=+51.265763738 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782573 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.782564196 +0000 UTC m=+51.265794799 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782598 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782613 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782622 3562 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782645 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.782638828 +0000 UTC m=+51.265869351 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.782682 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.782743 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.782768 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.782804 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782832 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782850 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782858 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782878 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782889 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782897 3562 projected.go:200] Error preparing data for 
projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782927 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782941 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.782892485 +0000 UTC m=+51.266123038 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782947 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782957 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782957 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.782950697 +0000 UTC m=+51.266181220 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.782839 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.783061 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.783094 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.783109 3562 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.783109 3562 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.783152 3562 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.783180 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.782995 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.782985748 +0000 UTC m=+51.266216261 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.783434 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:54.78341939 +0000 UTC m=+51.266650123 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.783467 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.783454881 +0000 UTC m=+51.266685724 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.783890 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.783979 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.784014 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.784087 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.784109 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.784120 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: 
E1213 00:06:38.784129 3562 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.784149 3562 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.784158 3562 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.784163 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.784219 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.784173 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.78415937 +0000 UTC m=+51.267390073 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.784235 3562 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.784254 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.784241483 +0000 UTC m=+51.267472196 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.784303 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:54.784278734 +0000 UTC m=+51.267509297 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.865445 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874349 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874436 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874494 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874504 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874561 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874510 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874446 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874633 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874465 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874641 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874679 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874446 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874730 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874754 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874732 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874799 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874803 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874820 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874712 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874869 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874568 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874728 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874942 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874787 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874980 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.875005 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.875082 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874783 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874844 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874984 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874600 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.875029 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.875200 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.875115 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.874891 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.875281 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.875426 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.875548 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.875686 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.875757 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.875816 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.876009 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.876208 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.876343 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.876463 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.876552 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.876627 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.876702 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.876782 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.876866 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.876967 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.877081 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.877215 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.877224 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.877275 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.877413 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.877507 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.877666 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.877677 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.877808 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.877881 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.878029 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.878122 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.878228 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.878328 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.878406 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.878479 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.878560 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.878688 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.878764 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.878864 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.878988 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.885338 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.885564 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.885725 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.885743 3562 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.885819 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:54.885800441 +0000 UTC m=+51.369030964 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.887287 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.887432 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.887549 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.887551 3562 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.887634 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.887581 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.887734 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.887700335 +0000 UTC m=+51.370931018 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.887741 3562 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.887797 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.887784267 +0000 UTC m=+51.371014970 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.989520 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:38 crc kubenswrapper[3562]: I1213 00:06:38.989648 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.989737 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.989777 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.989791 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.990017 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.989948044 +0000 UTC m=+51.473178567 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.990025 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.990169 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.990189 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:38 crc kubenswrapper[3562]: E1213 00:06:38.990295 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:54.990266353 +0000 UTC m=+51.473497066 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.093768 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.093848 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.093890 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094014 3562 projected.go:294] Couldn't get configMap 
openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094040 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094064 3562 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094087 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094107 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094118 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094124 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:06:55.094103096 +0000 UTC m=+51.577333619 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.094022 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094151 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:55.094140707 +0000 UTC m=+51.577371240 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094265 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094329 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094351 3562 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094266 3562 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094452 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094475 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:55.094439135 +0000 UTC m=+51.577669818 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.094645 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:55.09462802 +0000 UTC m=+51.577858743 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.195841 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.195965 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.196095 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.196275 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.196342 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.196350 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.196399 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.196433 3562 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.196361 3562 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.196563 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:06:55.196524428 +0000 UTC m=+51.679755051 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.196586 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.196610 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.196624 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.196670 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-13 00:06:55.196657232 +0000 UTC m=+51.679887755 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
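Note (not part of the captured log): each failed mount operation is rescheduled with a growing delay; nestedpendingoperations.go reports it as "durationBeforeRetry 16s" here. The 16s value is consistent with a per-operation exponential backoff that doubles on every failure (0.5s, 1s, 2s, 4s, 8s, 16s, ...). A rough sketch of that schedule; the 500ms starting point and the cap are assumptions for illustration, not values read from this log:

// backoff.go - illustrate a doubling retry delay with a cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // assumed initial delay
	maxDelay := 2 * time.Minute     // assumed cap
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, delay)
		delay *= 2 // double after every failure
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

The practical consequence is visible in the timestamps: the same volumes reappear in the log roughly 16 seconds later (around 00:06:54-55) rather than being retried in a tight loop.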
Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.299179 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.299226 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.299346 3562 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.299399 3562 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.299411 3562 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.299478 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:06:55.299458356 +0000 UTC m=+51.782688879 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.299551 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.299570 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.299583 3562 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.299634 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:06:55.299618281 +0000 UTC m=+51.782848804 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.472115 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:39 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:39 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:39 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.472218 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.874281 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.874310 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.874374 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.874409 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.874471 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.874528 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.874535 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.874558 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.874635 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.874631 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.874694 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.874703 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:39 crc kubenswrapper[3562]: I1213 00:06:39.874662 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.874760 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.874845 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.874949 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.875005 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.875082 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.875162 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.875218 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.875273 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:39 crc kubenswrapper[3562]: E1213 00:06:39.875326 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.472817 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:40 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:40 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:40 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.472976 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.875202 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.875433 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.875495 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.875582 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.875633 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.875715 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.875754 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.875834 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.875880 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.875997 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876042 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.876142 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876151 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876204 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876230 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876284 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876336 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876339 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876357 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876331 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876463 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876541 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.876545 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876608 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876628 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876601 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876681 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876717 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.876778 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.877016 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.877037 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.877142 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.877198 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.877428 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.877505 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.877635 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.877645 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.877717 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.877801 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.877823 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.877933 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.878050 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.878105 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.878224 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.878276 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.878416 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.878491 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.878619 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.878648 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.878685 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.878821 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.878999 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.879009 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.879152 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.879262 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.879342 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.879439 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.879528 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.879604 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.879713 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.879759 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.879812 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.879849 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.880045 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.880121 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.880199 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:40 crc kubenswrapper[3562]: I1213 00:06:40.880260 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.880341 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.880445 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.880528 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.880606 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:40 crc kubenswrapper[3562]: E1213 00:06:40.880686 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.470933 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:41 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:41 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:41 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.471028 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.875145 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.875262 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.875315 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.875347 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.875405 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.875180 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.875445 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.875347 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.875541 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:41 crc kubenswrapper[3562]: E1213 00:06:41.875556 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.875353 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:41 crc kubenswrapper[3562]: E1213 00:06:41.875692 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:41 crc kubenswrapper[3562]: I1213 00:06:41.875704 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:41 crc kubenswrapper[3562]: E1213 00:06:41.875948 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:41 crc kubenswrapper[3562]: E1213 00:06:41.876047 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:41 crc kubenswrapper[3562]: E1213 00:06:41.876168 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:41 crc kubenswrapper[3562]: E1213 00:06:41.876231 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:41 crc kubenswrapper[3562]: E1213 00:06:41.876297 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:41 crc kubenswrapper[3562]: E1213 00:06:41.876362 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:41 crc kubenswrapper[3562]: E1213 00:06:41.876433 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:41 crc kubenswrapper[3562]: E1213 00:06:41.876516 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:41 crc kubenswrapper[3562]: E1213 00:06:41.876564 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.473262 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:42 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:42 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:42 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.473370 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.875244 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.875560 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876175 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876222 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876360 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876469 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.876498 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876553 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876623 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876686 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.876753 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876760 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876790 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.876619 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876819 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.876857 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876873 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876927 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876934 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.876993 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877010 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877076 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.876999 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877145 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.877155 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877170 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.877247 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877251 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877304 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877343 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877375 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877393 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877459 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.877481 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877560 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877567 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877573 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.877767 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877803 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.877874 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.877954 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.877998 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.878019 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.878108 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.878219 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.878336 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.878430 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.878520 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.878561 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.878624 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.878639 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.879047 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.878724 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.878802 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.878892 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.878997 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.879115 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.879183 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:42 crc kubenswrapper[3562]: I1213 00:06:42.879229 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.879561 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.879644 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.879281 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.879440 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.879850 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.879944 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.880079 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.880130 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.880224 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.880380 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.880488 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.880599 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:42 crc kubenswrapper[3562]: E1213 00:06:42.880720 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.474242 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:43 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:43 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:43 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.474349 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.867227 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.874418 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.874537 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.874549 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.874573 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.874719 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.874726 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.874846 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.874881 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.874972 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.874971 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.875007 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.875040 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.874946 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:43 crc kubenswrapper[3562]: I1213 00:06:43.874983 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.875220 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.877658 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.877807 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.877982 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.878080 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.878274 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.878371 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:43 crc kubenswrapper[3562]: E1213 00:06:43.878708 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.472428 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:44 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:44 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:44 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.472586 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.874704 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.874872 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.874943 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.875000 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875036 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.875092 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875129 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875177 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875216 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875267 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875311 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875354 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875395 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.875453 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875480 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875516 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875491 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.875572 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875610 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875622 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875704 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875719 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.875776 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875808 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875841 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875813 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.875932 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875950 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.875969 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.876036 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.876045 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.876096 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.876116 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.876142 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.876182 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.876203 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.875236 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.876248 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.876588 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.876811 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.876926 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.876968 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.877027 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.877070 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.877299 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.877327 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.877397 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.877534 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.877703 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.877951 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.878059 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.878230 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.878278 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.878294 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.878345 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.878445 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.878605 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.878715 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.878792 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.878815 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.878982 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.879074 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.879149 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.879263 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.879302 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.879369 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.879473 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.879552 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:44 crc kubenswrapper[3562]: I1213 00:06:44.879626 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.879709 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.879802 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:44 crc kubenswrapper[3562]: E1213 00:06:44.879885 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.472061 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:45 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:45 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:45 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.472138 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.874612 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:45 crc kubenswrapper[3562]: E1213 00:06:45.874931 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.875156 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.875156 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.875239 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.875243 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.875314 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.875339 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.875267 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.875417 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:45 crc kubenswrapper[3562]: E1213 00:06:45.875676 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:45 crc kubenswrapper[3562]: E1213 00:06:45.875741 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:45 crc kubenswrapper[3562]: E1213 00:06:45.875857 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.875962 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:45 crc kubenswrapper[3562]: E1213 00:06:45.876077 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:45 crc kubenswrapper[3562]: E1213 00:06:45.876181 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:45 crc kubenswrapper[3562]: E1213 00:06:45.876293 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:45 crc kubenswrapper[3562]: E1213 00:06:45.876383 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:45 crc kubenswrapper[3562]: E1213 00:06:45.876470 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:45 crc kubenswrapper[3562]: I1213 00:06:45.876502 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:45 crc kubenswrapper[3562]: E1213 00:06:45.876557 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:45 crc kubenswrapper[3562]: E1213 00:06:45.876713 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.472006 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:46 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:46 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:46 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.472148 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.874367 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.874681 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.874745 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.874774 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.874748 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.874867 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.874689 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.874924 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.874936 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.874964 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875000 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875040 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875079 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875111 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875138 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875163 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875192 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.875276 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875309 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.875355 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875375 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875352 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875409 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875456 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875483 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.875493 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875520 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875564 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875590 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875636 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.875642 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875709 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875690 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.875778 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875816 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.875850 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875880 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875934 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.875949 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.875988 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.876012 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.876018 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.876059 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.876082 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.876115 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:46 crc kubenswrapper[3562]: I1213 00:06:46.876134 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.876219 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.876336 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.876463 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.876628 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.876749 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.876820 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877004 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877005 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877029 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877117 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877254 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877370 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877491 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877590 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877709 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877765 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877809 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877846 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877925 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.877995 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.878074 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.878164 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.878218 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.878327 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.878364 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:46 crc kubenswrapper[3562]: E1213 00:06:46.878418 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.470784 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:47 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:47 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:47 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.470851 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.874606 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.874632 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.874656 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.874672 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.874697 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.874705 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.874607 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.874744 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.874807 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:47 crc kubenswrapper[3562]: E1213 00:06:47.874926 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.874953 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:47 crc kubenswrapper[3562]: I1213 00:06:47.875010 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:47 crc kubenswrapper[3562]: E1213 00:06:47.875053 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:47 crc kubenswrapper[3562]: E1213 00:06:47.875377 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:47 crc kubenswrapper[3562]: E1213 00:06:47.875453 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:47 crc kubenswrapper[3562]: E1213 00:06:47.875402 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:47 crc kubenswrapper[3562]: E1213 00:06:47.875733 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:47 crc kubenswrapper[3562]: E1213 00:06:47.875615 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:47 crc kubenswrapper[3562]: E1213 00:06:47.875814 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:47 crc kubenswrapper[3562]: E1213 00:06:47.875879 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:47 crc kubenswrapper[3562]: E1213 00:06:47.875984 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:47 crc kubenswrapper[3562]: E1213 00:06:47.876076 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.473970 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:48 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:48 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:48 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.474093 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.868673 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875151 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875233 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875262 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875275 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875338 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875467 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.875493 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875567 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875585 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875626 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875652 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875714 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875768 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.875788 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875801 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875829 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875853 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875834 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875854 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.875864 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.876072 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.876109 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.876172 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.876298 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.876346 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.876410 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.876477 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.876495 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.876527 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.876591 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.876604 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.876649 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.876658 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.876752 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.876833 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.876937 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.876977 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.877034 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.877093 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.877191 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.877240 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.877253 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.877342 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.877403 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.877407 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.877491 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.877540 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.877547 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.877595 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.877602 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.877677 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.877683 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.877751 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.877861 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.878032 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.878144 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.878224 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.878352 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.878440 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.878559 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.878649 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.878714 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.878765 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.878950 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:48 crc kubenswrapper[3562]: I1213 00:06:48.879018 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.879117 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.879222 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.879300 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.879386 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.879511 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.879573 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:48 crc kubenswrapper[3562]: E1213 00:06:48.879634 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.473570 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:49 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:49 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:49 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.473674 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.874575 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.874641 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.874585 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.874702 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.874641 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.874712 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.874746 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.874818 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.874704 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.874822 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:49 crc kubenswrapper[3562]: E1213 00:06:49.874938 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:49 crc kubenswrapper[3562]: E1213 00:06:49.875026 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:49 crc kubenswrapper[3562]: I1213 00:06:49.875066 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:49 crc kubenswrapper[3562]: E1213 00:06:49.875140 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:49 crc kubenswrapper[3562]: E1213 00:06:49.875321 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:49 crc kubenswrapper[3562]: E1213 00:06:49.875438 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:49 crc kubenswrapper[3562]: E1213 00:06:49.875482 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:49 crc kubenswrapper[3562]: E1213 00:06:49.875589 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:49 crc kubenswrapper[3562]: E1213 00:06:49.875645 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:49 crc kubenswrapper[3562]: E1213 00:06:49.875682 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:49 crc kubenswrapper[3562]: E1213 00:06:49.875727 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:49 crc kubenswrapper[3562]: E1213 00:06:49.875800 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.175498 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.177052 3562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.223149 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output="" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.273305 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output="" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.471439 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:50 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:50 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:50 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.471547 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875200 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875255 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875212 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875289 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875261 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875222 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875320 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875373 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875384 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875406 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875425 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875427 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875352 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875473 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875473 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875519 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875522 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875356 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875568 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875572 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875522 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875573 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875232 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875625 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875535 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875642 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875604 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875686 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875626 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875425 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875689 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875645 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875598 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.875870 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.876009 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.876131 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.876183 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.876235 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.876319 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.876415 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.876507 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.876582 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:50 crc kubenswrapper[3562]: I1213 00:06:50.876635 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.876779 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.876895 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.877031 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.877139 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.877230 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.877331 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.877431 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.877514 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.877596 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.877682 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.877764 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.877833 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.877926 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.878101 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.878184 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.878246 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.878261 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.878426 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.878583 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.878618 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.878717 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.878826 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.878922 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.878996 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.879120 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.879161 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.879254 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.879391 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:50 crc kubenswrapper[3562]: E1213 00:06:50.879475 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.471784 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:51 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:51 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:51 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.471897 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.874850 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.874925 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.874850 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.874877 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.875022 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.875094 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:51 crc kubenswrapper[3562]: E1213 00:06:51.875102 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.875152 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:51 crc kubenswrapper[3562]: E1213 00:06:51.875225 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:51 crc kubenswrapper[3562]: E1213 00:06:51.875293 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.875331 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:51 crc kubenswrapper[3562]: E1213 00:06:51.875430 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:51 crc kubenswrapper[3562]: E1213 00:06:51.875495 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:51 crc kubenswrapper[3562]: E1213 00:06:51.875553 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.875592 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.875615 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:51 crc kubenswrapper[3562]: I1213 00:06:51.875597 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:51 crc kubenswrapper[3562]: E1213 00:06:51.875667 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:51 crc kubenswrapper[3562]: E1213 00:06:51.875747 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:51 crc kubenswrapper[3562]: E1213 00:06:51.875819 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:51 crc kubenswrapper[3562]: E1213 00:06:51.875916 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:51 crc kubenswrapper[3562]: E1213 00:06:51.875985 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.471250 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:52 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:52 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:52 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.471344 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874554 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874616 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874685 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874752 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874781 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874833 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874869 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875121 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875179 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874695 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874863 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874827 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.875293 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874875 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874895 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875357 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.875426 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874964 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874974 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874978 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874848 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874999 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875001 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875007 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875026 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875013 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.875660 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875035 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875040 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.874572 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875053 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875073 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.875778 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875068 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.875826 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875083 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875090 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875104 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875127 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.876012 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875131 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:52 crc kubenswrapper[3562]: I1213 00:06:52.875129 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.876106 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.875210 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.876164 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.876233 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.876296 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.876611 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.876728 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.876860 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.877091 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.877171 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.877307 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.877519 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.877632 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.877758 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.878225 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.878290 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.878350 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.878470 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.878518 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.878560 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.878672 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.878758 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.878872 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.878991 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.879107 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.879302 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.879355 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.879467 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.879647 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:52 crc kubenswrapper[3562]: E1213 00:06:52.879649 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.049072 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.471796 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:53 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:53 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:53 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.472300 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.869544 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.874885 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.875002 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.875001 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.896362 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.896442 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.896468 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.896572 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.897292 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.897452 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.897534 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.897577 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:53 crc kubenswrapper[3562]: I1213 00:06:53.897534 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.897586 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.897702 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.897821 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.897919 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.897978 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.898065 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.898297 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.898304 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.898359 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:53 crc kubenswrapper[3562]: E1213 00:06:53.898463 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.471584 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:54 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:54 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:54 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.471653 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.670321 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.670391 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.670414 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.670437 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.670460 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.670482 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.670503 3562 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.670526 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.670558 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670590 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670683 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670704 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.670674425 +0000 UTC m=+83.153904958 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670731 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670775 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670801 3562 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670744 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.670724586 +0000 UTC m=+83.153955109 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670866 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.67084263 +0000 UTC m=+83.154073153 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670876 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670878 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.670871861 +0000 UTC m=+83.154102384 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670729 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.670998 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670819 3562 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.670678 3562 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671052 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.671021695 +0000 UTC m=+83.154252218 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671100 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.671087147 +0000 UTC m=+83.154317750 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671062 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671123 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.671113147 +0000 UTC m=+83.154343770 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671165 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.671134618 +0000 UTC m=+83.154365141 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.671218 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671222 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:07:26.67121188 +0000 UTC m=+83.154442403 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671255 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.671249711 +0000 UTC m=+83.154480234 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.671298 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671303 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.671324 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.671376 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671391 3562 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.671407 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671421 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 
nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.671410426 +0000 UTC m=+83.154640949 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671471 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.671457567 +0000 UTC m=+83.154688090 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.671474 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671506 3562 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.671516 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671526 3562 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671537 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.671528609 +0000 UTC m=+83.154759222 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671560 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:07:26.671550999 +0000 UTC m=+83.154781732 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671453 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671569 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671567 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671595 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.67158675 +0000 UTC m=+83.154817373 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671612 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.671604541 +0000 UTC m=+83.154835174 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671630 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.671622481 +0000 UTC m=+83.154853104 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.671656 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.671684 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.671714 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671835 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671939 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.671889269 +0000 UTC m=+83.155119872 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671964 3562 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.671972 3562 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.672016 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.672005403 +0000 UTC m=+83.155235996 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.672139 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.672124876 +0000 UTC m=+83.155355399 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.672851 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.672997 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.673063 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.673051772 +0000 UTC m=+83.156282365 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774130 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774197 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774225 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774247 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774271 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774296 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774355 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774378 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: 
\"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774400 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774416 3562 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774449 3562 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774521 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774479 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774567 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774530 3562 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774574 3562 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774637 3562 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774532 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.774501688 +0000 UTC m=+83.257732281 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774717 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.774698753 +0000 UTC m=+83.257929276 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774750 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774765 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774775 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774796 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774818 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774848 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774860 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774914 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774929 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.774868358 +0000 UTC m=+83.258098881 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774921 3562 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774971 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.774983 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.774977061 +0000 UTC m=+83.258207584 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775002 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.774991871 +0000 UTC m=+83.258222384 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.774970 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775014 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775008412 +0000 UTC m=+83.258238935 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775026 3562 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775026 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775027 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775021772 +0000 UTC m=+83.258252295 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775110 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775096094 +0000 UTC m=+83.258326707 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775129 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775118225 +0000 UTC m=+83.258348868 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775158 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775148916 +0000 UTC m=+83.258379439 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775182 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775172366 +0000 UTC m=+83.258402969 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775206 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775195137 +0000 UTC m=+83.258425720 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775272 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775316 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775330 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775341 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775349 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775351 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session 
podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775342041 +0000 UTC m=+83.258572564 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775368 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775362272 +0000 UTC m=+83.258592795 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775384 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775377332 +0000 UTC m=+83.258607855 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775397 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775432 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775423063 +0000 UTC m=+83.258653646 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775436 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775447 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775475 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775455674 +0000 UTC m=+83.258686197 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775512 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775558 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775580 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775585 3562 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775602 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod 
\"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775648 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775662 3562 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775664 3562 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775695 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775684952 +0000 UTC m=+83.258915565 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775696 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775711 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775703642 +0000 UTC m=+83.258934165 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775728 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775722103 +0000 UTC m=+83.258952626 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775765 3562 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775771 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775763724 +0000 UTC m=+83.258994247 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775737 3562 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775778 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775788 3562 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775803 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775811 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775801755 +0000 UTC m=+83.259032278 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775813 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775837 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775830796 +0000 UTC m=+83.259061319 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775843 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775852 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775844476 +0000 UTC m=+83.259074999 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775865 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775860347 +0000 UTC m=+83.259090870 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775883 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775897 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.775929 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775952 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775960 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.775946389 +0000 UTC m=+83.259176912 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775964 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.775979 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.77597182 +0000 UTC m=+83.259202343 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776004 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.77598518 +0000 UTC m=+83.259215803 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776026 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776053 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776088 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776101 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776124 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776134 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776143 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776132494 +0000 UTC m=+83.259363017 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776170 3562 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776171 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776194 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776184736 +0000 UTC m=+83.259415259 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776217 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776241 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776264 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776288 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776310 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod 
\"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776321 3562 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776331 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776334 3562 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776370 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776358211 +0000 UTC m=+83.259588734 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776381 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776389 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776380871 +0000 UTC m=+83.259611394 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776353 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776401 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. 
No retries permitted until 2025-12-13 00:07:26.776395972 +0000 UTC m=+83.259626495 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776428 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776441 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776448 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776467 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776456783 +0000 UTC m=+83.259687416 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776479 3562 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776498 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776528 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776501 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776495384 +0000 UTC m=+83.259725897 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776560 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776554056 +0000 UTC m=+83.259784579 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776572 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776566946 +0000 UTC m=+83.259797469 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776591 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776623 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776624 3562 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776647 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776665 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776656339 +0000 UTC m=+83.259886862 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776673 3562 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776680 3562 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776516 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776701 3562 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776736 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776452 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776738 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776724971 +0000 UTC m=+83.259955554 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776807 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776797633 +0000 UTC m=+83.260028156 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776820 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776814333 +0000 UTC m=+83.260044856 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776768 3562 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776837 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776827064 +0000 UTC m=+83.260057587 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776859 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776849484 +0000 UTC m=+83.260079997 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776774 3562 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776881 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776889 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776883675 +0000 UTC m=+83.260114198 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776940 3562 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776945 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.776961 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.776955497 +0000 UTC m=+83.260186020 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.776992 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.777015 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777019 3562 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777054 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.7770378 +0000 UTC m=+83.260268463 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777060 3562 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777065 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.777098 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777107 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.777085101 +0000 UTC m=+83.260315624 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777116 3562 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777123 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.777116592 +0000 UTC m=+83.260347115 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.777159 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.777193 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777207 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777255 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.777225585 +0000 UTC m=+83.260456108 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777282 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777294 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777305 3562 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777328 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.777320947 +0000 UTC m=+83.260551470 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777408 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.777392149 +0000 UTC m=+83.260622732 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777438 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.77742906 +0000 UTC m=+83.260659713 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.777612 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.777650 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.777679 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.777869 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 
13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.777931 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777943 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.777967 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.777990 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.777977716 +0000 UTC m=+83.261208299 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778020 3562 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778043 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778055 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778046238 +0000 UTC m=+83.261276821 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778085 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778105 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778123 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778135 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778152 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.77813786 +0000 UTC m=+83.261368443 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778182 3562 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778191 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778211 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778201222 +0000 UTC m=+83.261431865 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778251 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778222 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778296 3562 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778180 3562 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778314 3562 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778087 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778277 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778287 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778281024 +0000 UTC m=+83.261511547 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778365 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778354506 +0000 UTC m=+83.261585099 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778377 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778371537 +0000 UTC m=+83.261602060 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778389 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778382617 +0000 UTC m=+83.261613140 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778404 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778396637 +0000 UTC m=+83.261627270 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778425 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778416568 +0000 UTC m=+83.261647181 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778446 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778438439 +0000 UTC m=+83.261669062 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778473 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778503 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778532 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778553 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778617 3562 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778653 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778656 3562 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778656 3562 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778655 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778644785 +0000 UTC m=+83.261875368 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778739 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778765 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778800 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778802 3562 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778843 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778832191 +0000 UTC m=+83.262062794 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778844 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778876 3562 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778923 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778894392 +0000 UTC m=+83.262124915 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778927 3562 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778955 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778969 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778958334 +0000 UTC m=+83.262188957 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.778988 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.778993 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.778983845 +0000 UTC m=+83.262214488 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.779012 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779014 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.779004475 +0000 UTC m=+83.262235118 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779034 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.779024866 +0000 UTC m=+83.262255499 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779044 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779064 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.779058077 +0000 UTC m=+83.262288600 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.779067 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779097 3562 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.779107 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779119 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.779112378 +0000 UTC m=+83.262342901 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779135 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.779129229 +0000 UTC m=+83.262359742 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.779146 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779176 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.779182 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779187 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779197 3562 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779217 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.779211421 +0000 UTC m=+83.262442054 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.779218 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.779246 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779264 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.779279 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779300 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.779288783 +0000 UTC m=+83.262519316 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779313 3562 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.779329 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779333 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:07:26.779327584 +0000 UTC m=+83.262558107 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779359 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779379 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.779373226 +0000 UTC m=+83.262603739 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779461 3562 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779494 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.779485079 +0000 UTC m=+83.262715712 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779542 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779574 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.779563341 +0000 UTC m=+83.262793964 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779601 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. 
No retries permitted until 2025-12-13 00:07:26.779592322 +0000 UTC m=+83.262822925 (durationBeforeRetry 32s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779617 3562 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.779660 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.779649063 +0000 UTC m=+83.262879676 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875109 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875151 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875152 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875434 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875565 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875466 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875193 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875598 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875176 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875204 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875692 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.875776 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875607 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875214 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.875870 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875227 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875233 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875244 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875256 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875266 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.876051 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875262 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875422 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875479 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.875480 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875490 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875155 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875504 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875499 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875519 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875497 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.876228 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875532 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875543 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.876321 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875540 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875552 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875545 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875618 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.876527 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875630 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875642 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.875654 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.876622 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.876156 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.876734 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.876855 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.876998 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877082 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877178 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877263 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877342 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877426 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877486 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877550 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877646 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877712 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877773 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877829 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.877931 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878057 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878097 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878253 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878348 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878427 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878524 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878562 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878575 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878642 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878698 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878771 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.878829 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.881172 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.881205 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.881226 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.881258 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.881639 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.881681 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.881719 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.881802 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:54 crc 
kubenswrapper[3562]: I1213 00:06:54.881842 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.881924 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.881991 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.882043 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.882089 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.882135 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.882184 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.882214 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 
00:06:54.882242 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882271 3562 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882289 3562 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882300 3562 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.882375 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882379 3562 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882391 3562 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882400 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.882414 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882437 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.882418347 +0000 UTC m=+83.365648870 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.882465 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882486 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882498 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882507 3562 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882568 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882581 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882588 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882613 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.882603892 +0000 UTC m=+83.365834415 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.882617 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882656 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882667 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882674 3562 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.882727 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882734 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882746 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882753 3562 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882778 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.882769326 +0000 UTC m=+83.365999849 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882797 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882809 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882818 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882820 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882832 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882839 3562 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882845 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.882835668 +0000 UTC m=+83.366066191 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882862 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.882854569 +0000 UTC m=+83.366085092 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882891 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882921 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882925 3562 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882930 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882934 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882957 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.882948751 +0000 UTC m=+83.366179274 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882969 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.882963072 +0000 UTC m=+83.366193595 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.882992 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883004 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883011 3562 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883013 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883025 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883032 3562 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883040 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.883028684 +0000 UTC m=+83.366259207 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883055 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.883046974 +0000 UTC m=+83.366277497 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883088 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883097 3562 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883109 3562 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883116 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883139 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.883131787 +0000 UTC m=+83.366362310 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883099 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883150 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883171 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.883164667 +0000 UTC m=+83.366395190 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883142 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883181 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883186 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883192 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883215 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.883207389 +0000 UTC m=+83.366437912 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883250 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883259 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883271 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883275 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.88326664 +0000 UTC m=+83.366497163 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883263 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883289 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883314 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883324 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883332 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883362 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883375 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883383 3562 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883389 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883401 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883278 3562 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object 
"hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883409 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883193 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883434 3562 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883316 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.883308431 +0000 UTC m=+83.366538964 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883465 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.883457916 +0000 UTC m=+83.366688439 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883467 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883481 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883489 3562 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883512 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.883502777 +0000 UTC m=+83.366733300 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883527 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.883520117 +0000 UTC m=+83.366750630 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883580 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.883571279 +0000 UTC m=+83.366801802 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883595 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.883588289 +0000 UTC m=+83.366818822 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883609 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.88360134 +0000 UTC m=+83.366831863 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883624 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.88361679 +0000 UTC m=+83.366847313 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.883637 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.88363089 +0000 UTC m=+83.366861413 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.984791 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.984880 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:54 crc kubenswrapper[3562]: I1213 00:06:54.985111 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.985123 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.985183 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.985197 3562 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.985120 3562 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.985289 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.985260841 +0000 UTC m=+83.468491504 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.985331 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.985278 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.985388 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.985401 3562 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.985478 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.985445106 +0000 UTC m=+83.468675629 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:54 crc kubenswrapper[3562]: E1213 00:06:54.985508 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:26.985493977 +0000 UTC m=+83.468724690 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.089122 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.089370 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.089398 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.089493 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.089507 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.089750 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-13 00:07:27.089720072 +0000 UTC m=+83.572950595 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.089744 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.089819 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.089850 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.090035 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:27.08999517 +0000 UTC m=+83.573225843 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.193496 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.193578 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.193615 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.193679 3562 projected.go:294] Couldn't get configMap 
openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.193734 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.193746 3562 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.193795 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.193808 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:07:27.193786032 +0000 UTC m=+83.677016555 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.193813 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.193826 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.193691 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.193875 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:27.193859544 +0000 UTC m=+83.677090067 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.194097 3562 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.194120 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.194113 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.194179 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.194205 3562 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.194334 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:27.194302386 +0000 UTC m=+83.677532949 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.194590 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:27.194574524 +0000 UTC m=+83.677805047 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.300127 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.300210 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.300271 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.300316 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.300382 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300414 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300452 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300476 3562 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300577 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. 
No retries permitted until 2025-12-13 00:07:27.300548618 +0000 UTC m=+83.783779151 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300646 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300666 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300665 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300736 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300766 3562 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300678 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300890 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:07:27.300855366 +0000 UTC m=+83.784085959 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300665 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300997 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-13 00:07:27.30097918 +0000 UTC m=+83.784209923 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300998 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.301028 3562 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.301077 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:27.301063482 +0000 UTC m=+83.784294205 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.300678 3562 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.301122 3562 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.301138 3562 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.301195 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:07:27.301182465 +0000 UTC m=+83.784413208 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.472731 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:55 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:55 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:55 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.472833 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.875143 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.875218 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.875278 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.875314 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.875342 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.875231 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.875678 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.875181 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.875956 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.876057 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.876166 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.876235 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.876318 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.876456 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.876558 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.876627 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.876703 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:55 crc kubenswrapper[3562]: I1213 00:06:55.876759 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.876856 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.876945 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.877018 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:55 crc kubenswrapper[3562]: E1213 00:06:55.877074 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.470956 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:56 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:56 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:56 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.471034 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.874573 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.874610 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.874739 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.874756 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.874842 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.874964 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.874983 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875023 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875059 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.875081 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875122 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875158 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875179 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.875218 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875163 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875247 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875267 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875280 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875294 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875250 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875293 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875333 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875347 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875381 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.875368 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.875449 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875271 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875519 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875544 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875554 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.875831 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875862 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.875842 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.875947 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.876035 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.876148 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.876253 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.876323 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.876475 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.876533 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.876569 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.876665 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.876705 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.876755 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.876806 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.876857 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.876931 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.877021 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.877059 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.877091 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.877126 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.877233 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.877278 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.877335 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.877501 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.877548 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.877642 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.877690 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:56 crc kubenswrapper[3562]: I1213 00:06:56.877731 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.877778 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.877853 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.878029 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.878130 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.878252 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.878311 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.878514 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.878699 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.878773 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.878894 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.878964 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.879069 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:56 crc kubenswrapper[3562]: E1213 00:06:56.879098 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.471508 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:57 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:57 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:57 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.471593 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.874256 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.874343 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.874288 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:57 crc kubenswrapper[3562]: E1213 00:06:57.874658 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.874999 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.875077 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.875082 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.875115 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.875028 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.875071 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.875087 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:57 crc kubenswrapper[3562]: E1213 00:06:57.875331 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:57 crc kubenswrapper[3562]: I1213 00:06:57.875653 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:57 crc kubenswrapper[3562]: E1213 00:06:57.875674 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:57 crc kubenswrapper[3562]: E1213 00:06:57.875767 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:57 crc kubenswrapper[3562]: E1213 00:06:57.875789 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:57 crc kubenswrapper[3562]: E1213 00:06:57.876159 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:57 crc kubenswrapper[3562]: E1213 00:06:57.876389 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:57 crc kubenswrapper[3562]: E1213 00:06:57.876435 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:06:57 crc kubenswrapper[3562]: E1213 00:06:57.876494 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:06:57 crc kubenswrapper[3562]: E1213 00:06:57.876588 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:06:57 crc kubenswrapper[3562]: E1213 00:06:57.876648 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.471136 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:58 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:58 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:58 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.471219 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.870664 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874197 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874308 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874327 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874597 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874641 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874592 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874712 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874769 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874774 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874814 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874879 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874891 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874929 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874955 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875003 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874739 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874879 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875038 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875052 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875002 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874959 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875083 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874882 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874753 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874891 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.874969 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875178 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875220 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.875225 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875236 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875019 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875021 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.874748 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875329 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875380 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.875396 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875495 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.875526 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.875694 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.876002 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.876080 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.876268 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.876382 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:06:58 crc kubenswrapper[3562]: I1213 00:06:58.876418 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.876570 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.876768 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.876947 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.877077 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.877164 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.877310 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.877437 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.877509 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.877585 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.877655 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.877755 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.877925 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.878081 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.878084 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.878159 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.878195 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.878316 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.878509 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.878530 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.878663 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.878798 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.878873 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.878990 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.879081 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.879168 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.879265 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.879401 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:06:58 crc kubenswrapper[3562]: E1213 00:06:58.879458 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.472064 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:06:59 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:06:59 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:06:59 crc kubenswrapper[3562]: healthz check failed Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.472164 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.874852 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.874978 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.875006 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.875033 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.875083 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.875089 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.874865 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.874865 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.874891 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.874918 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:06:59 crc kubenswrapper[3562]: I1213 00:06:59.874949 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:06:59 crc kubenswrapper[3562]: E1213 00:06:59.875385 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:06:59 crc kubenswrapper[3562]: E1213 00:06:59.875530 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:06:59 crc kubenswrapper[3562]: E1213 00:06:59.875674 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:06:59 crc kubenswrapper[3562]: E1213 00:06:59.875785 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:06:59 crc kubenswrapper[3562]: E1213 00:06:59.875896 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:06:59 crc kubenswrapper[3562]: E1213 00:06:59.875968 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:06:59 crc kubenswrapper[3562]: E1213 00:06:59.876115 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 13 00:06:59 crc kubenswrapper[3562]: E1213 00:06:59.876280 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 13 00:06:59 crc kubenswrapper[3562]: E1213 00:06:59.876299 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 13 00:06:59 crc kubenswrapper[3562]: E1213 00:06:59.876383 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 13 00:06:59 crc kubenswrapper[3562]: E1213 00:06:59.876469 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.471811 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 00:07:00 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld
Dec 13 00:07:00 crc kubenswrapper[3562]: [+]process-running ok
Dec 13 00:07:00 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.471963 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.875324 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.875391 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.875464 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.875500 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.875391 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.875683 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.875695 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.875709 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.875821 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.875837 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.875896 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.876090 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.876163 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.876210 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.876244 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.876089 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.875693 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.876344 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.876361 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.876393 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.876409 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.876463 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.876464 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.876308 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.876665 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.876779 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.876863 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.876947 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.877006 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.877039 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.877084 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.877141 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.877215 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.877259 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.877377 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.877414 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.877428 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.877497 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.877509 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.877550 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.877742 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.877827 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.877862 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.877838 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.877976 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.877986 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.878141 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.878242 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.878299 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.878322 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.878490 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.878535 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.878699 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.878806 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.878854 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879043 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879063 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.879085 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879084 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.879140 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879180 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 13 00:07:00 crc kubenswrapper[3562]: I1213 00:07:00.879267 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879266 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879356 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879400 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879479 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879516 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879593 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879627 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879706 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879823 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 13 00:07:00 crc kubenswrapper[3562]: E1213 00:07:00.879949 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.472743 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 00:07:01 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld
Dec 13 00:07:01 crc kubenswrapper[3562]: [+]process-running ok
Dec 13 00:07:01 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.472847 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.874201 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.874270 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.874316 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.874329 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.874277 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.874372 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.874326 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.874223 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.874399 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:07:01 crc kubenswrapper[3562]: E1213 00:07:01.874466 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 13 00:07:01 crc kubenswrapper[3562]: E1213 00:07:01.874600 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 13 00:07:01 crc kubenswrapper[3562]: E1213 00:07:01.874698 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 13 00:07:01 crc kubenswrapper[3562]: E1213 00:07:01.874757 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.874788 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 13 00:07:01 crc kubenswrapper[3562]: E1213 00:07:01.874853 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 13 00:07:01 crc kubenswrapper[3562]: I1213 00:07:01.874874 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 13 00:07:01 crc kubenswrapper[3562]: E1213 00:07:01.874982 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 13 00:07:01 crc kubenswrapper[3562]: E1213 00:07:01.875083 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 13 00:07:01 crc kubenswrapper[3562]: E1213 00:07:01.875150 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 13 00:07:01 crc kubenswrapper[3562]: E1213 00:07:01.875250 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 13 00:07:01 crc kubenswrapper[3562]: E1213 00:07:01.875328 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 13 00:07:01 crc kubenswrapper[3562]: E1213 00:07:01.875404 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.471504 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 00:07:02 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld
Dec 13 00:07:02 crc kubenswrapper[3562]: [+]process-running ok
Dec 13 00:07:02 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.471581 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874206 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874246 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874287 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874294 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874326 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874257 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874365 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874372 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874329 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874411 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874348 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874426 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874392 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874465 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874374 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874406 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874431 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874576 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.874580 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874599 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874603 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874601 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874649 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874663 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874585 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874606 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874711 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874462 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874479 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874434 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874622 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874634 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.874754 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.874934 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.874933 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.874999 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.875032 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.875081 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.875122 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.875098 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.875307 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.875331 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.875450 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.875511 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.875674 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.875711 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.875783 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 13 00:07:02 crc kubenswrapper[3562]: I1213 00:07:02.875889 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.876007 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.876083 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.876128 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.876186 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.876384 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.876494 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.876805 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.876857 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877075 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877109 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877250 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877357 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877368 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877387 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877467 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877557 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877580 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877607 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877678 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877792 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877826 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877898 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877989 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 13 00:07:02 crc kubenswrapper[3562]: E1213 00:07:02.877996 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.471381 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 00:07:03 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld
Dec 13 00:07:03 crc kubenswrapper[3562]: [+]process-running ok
Dec 13 00:07:03 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.471461 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.725329 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.725413 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.725446 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.725485 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.725550 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.871579 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.874557 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.874605 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.874632 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.874731 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.874746 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.874796 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.874797 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.874825 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.874832 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.875040 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.876589 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 13 00:07:03 crc kubenswrapper[3562]: I1213 00:07:03.876673 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.876815 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.876851 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.876950 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.877010 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.877071 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.877145 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.877205 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.877273 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.877568 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 13 00:07:03 crc kubenswrapper[3562]: E1213 00:07:03.877888 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.471476 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:04 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:04 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:04 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.471552 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.874841 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.874898 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.874979 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.874995 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.875052 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.875055 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.874859 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.874931 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.875131 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.875154 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.875166 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.875310 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.875438 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.875522 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.875610 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.875629 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.875794 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.875870 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.875978 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.876001 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876016 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876043 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876046 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876112 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876112 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.876083 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876129 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876145 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876155 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876193 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876201 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876202 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876194 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.876338 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876372 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876428 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876553 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.876562 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876603 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.876731 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876744 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.876839 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.876944 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.876988 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877073 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877119 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.877158 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.877186 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877243 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877191 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877313 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877390 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877456 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877533 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.877579 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877706 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877784 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877921 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.877982 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878086 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:04 crc kubenswrapper[3562]: I1213 00:07:04.878156 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878231 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878241 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878277 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878428 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878313 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878389 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878409 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878463 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878517 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878682 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:04 crc kubenswrapper[3562]: E1213 00:07:04.878763 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.470827 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:05 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:05 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:05 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.470887 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.874321 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.874443 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:05 crc kubenswrapper[3562]: E1213 00:07:05.874507 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.874539 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.874556 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:05 crc kubenswrapper[3562]: E1213 00:07:05.874667 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.874693 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.874705 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.874708 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.874772 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:05 crc kubenswrapper[3562]: E1213 00:07:05.874798 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.874820 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:05 crc kubenswrapper[3562]: E1213 00:07:05.874894 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.874947 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:05 crc kubenswrapper[3562]: I1213 00:07:05.874966 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:05 crc kubenswrapper[3562]: E1213 00:07:05.875088 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:05 crc kubenswrapper[3562]: E1213 00:07:05.875146 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:05 crc kubenswrapper[3562]: E1213 00:07:05.875220 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:05 crc kubenswrapper[3562]: E1213 00:07:05.875303 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:05 crc kubenswrapper[3562]: E1213 00:07:05.875391 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:05 crc kubenswrapper[3562]: E1213 00:07:05.875470 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:05 crc kubenswrapper[3562]: E1213 00:07:05.875537 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.470564 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:06 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:06 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:06 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.470640 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874348 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874425 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874492 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874545 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874578 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874522 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874599 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874619 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874682 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874704 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874727 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874747 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874767 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874774 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874577 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874752 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874541 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874840 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874849 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874933 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874944 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874958 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874967 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.875014 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.875022 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874939 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874553 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.874854 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.875071 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874947 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874859 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.874546 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874888 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.875150 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.874885 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.875170 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.875280 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.875291 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.875305 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.875420 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.875487 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.875722 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.875760 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:06 crc kubenswrapper[3562]: I1213 00:07:06.875840 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.875953 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.876005 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.876082 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.876165 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.876410 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.876478 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.876683 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.876765 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.876856 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.876962 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877026 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877121 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877215 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877273 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877321 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877410 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877505 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877668 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877693 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877732 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877758 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877813 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.877888 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.878015 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.878072 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.878150 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.878222 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:06 crc kubenswrapper[3562]: E1213 00:07:06.878296 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.471143 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:07 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:07 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:07 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.471277 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.874608 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.874642 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.874771 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.874787 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.874859 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.874885 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.874961 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:07 crc kubenswrapper[3562]: E1213 00:07:07.875032 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.875165 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.875224 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.875270 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:07 crc kubenswrapper[3562]: E1213 00:07:07.875354 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:07 crc kubenswrapper[3562]: I1213 00:07:07.875598 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:07 crc kubenswrapper[3562]: E1213 00:07:07.875598 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:07 crc kubenswrapper[3562]: E1213 00:07:07.875880 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:07 crc kubenswrapper[3562]: E1213 00:07:07.876115 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:07 crc kubenswrapper[3562]: E1213 00:07:07.876243 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:07 crc kubenswrapper[3562]: E1213 00:07:07.876360 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:07 crc kubenswrapper[3562]: E1213 00:07:07.876523 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:07 crc kubenswrapper[3562]: E1213 00:07:07.876671 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:07 crc kubenswrapper[3562]: E1213 00:07:07.876802 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:07 crc kubenswrapper[3562]: E1213 00:07:07.876983 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.471373 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:08 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:08 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:08 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.471497 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.873277 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.874898 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.875301 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875322 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875385 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875376 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875348 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875442 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875490 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875511 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875553 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875322 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875519 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875612 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875385 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875456 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875453 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875483 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.875822 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875859 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875929 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875977 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.875983 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.876020 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.876057 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.876081 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.876243 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.876399 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.876537 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.876599 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.876642 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.876602 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.876667 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.876750 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.876746 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877026 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877037 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877122 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877212 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.877261 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877330 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877403 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877455 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877520 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.877562 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877624 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.877661 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.877712 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877770 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877826 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.877866 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.877942 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.878014 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.878065 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.878165 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.878297 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.878452 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.878542 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.878608 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.878629 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.878679 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.878763 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.878847 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.879003 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.879147 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:08 crc kubenswrapper[3562]: I1213 00:07:08.879215 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.879344 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.879455 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.879646 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.879789 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.880001 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.880170 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:08 crc kubenswrapper[3562]: E1213 00:07:08.880321 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.471003 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:09 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:09 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:09 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.471108 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.874750 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.874790 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.874832 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.874939 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.874950 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.874998 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.874852 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.875121 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.875117 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:09 crc kubenswrapper[3562]: E1213 00:07:09.875578 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.875695 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:09 crc kubenswrapper[3562]: E1213 00:07:09.875846 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:09 crc kubenswrapper[3562]: I1213 00:07:09.875938 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:09 crc kubenswrapper[3562]: E1213 00:07:09.876300 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:09 crc kubenswrapper[3562]: E1213 00:07:09.876511 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:09 crc kubenswrapper[3562]: E1213 00:07:09.876774 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:09 crc kubenswrapper[3562]: E1213 00:07:09.876875 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:09 crc kubenswrapper[3562]: E1213 00:07:09.877068 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:09 crc kubenswrapper[3562]: E1213 00:07:09.877154 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:09 crc kubenswrapper[3562]: E1213 00:07:09.877288 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:09 crc kubenswrapper[3562]: E1213 00:07:09.877501 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:09 crc kubenswrapper[3562]: E1213 00:07:09.877661 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.480685 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:10 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:10 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:10 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.480832 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875056 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875115 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875072 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875167 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875072 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875096 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875250 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875258 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875315 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875134 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875314 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875250 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.875528 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875284 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875608 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875625 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875559 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875719 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875598 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.875820 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.875860 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.876030 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.876112 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.876152 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.876349 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.876508 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.876626 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.876671 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.876673 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.876978 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877006 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877052 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877098 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877115 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877135 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877150 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877190 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877221 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877190 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877236 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.877291 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877426 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877453 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.877632 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.877741 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:10 crc kubenswrapper[3562]: I1213 00:07:10.877768 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.877961 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.877967 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.878273 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.878352 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.878452 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.878551 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.878745 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.878840 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.878994 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.879097 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.879211 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.879255 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.879359 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.879371 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.879427 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.879461 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.879621 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.879697 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.879761 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.879845 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.880013 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.880107 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.880227 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.880338 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.880388 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:10 crc kubenswrapper[3562]: E1213 00:07:10.880469 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.471870 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:11 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:11 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:11 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.472004 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.874759 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.874847 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.874880 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.874937 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.874959 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.874853 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.875060 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.875077 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.875109 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.875180 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:11 crc kubenswrapper[3562]: I1213 00:07:11.875238 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:11 crc kubenswrapper[3562]: E1213 00:07:11.875494 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:11 crc kubenswrapper[3562]: E1213 00:07:11.875732 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:11 crc kubenswrapper[3562]: E1213 00:07:11.875829 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:11 crc kubenswrapper[3562]: E1213 00:07:11.876216 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:11 crc kubenswrapper[3562]: E1213 00:07:11.876233 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:11 crc kubenswrapper[3562]: E1213 00:07:11.876307 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:11 crc kubenswrapper[3562]: E1213 00:07:11.876515 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:11 crc kubenswrapper[3562]: E1213 00:07:11.876641 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:11 crc kubenswrapper[3562]: E1213 00:07:11.876720 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:11 crc kubenswrapper[3562]: E1213 00:07:11.877028 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:11 crc kubenswrapper[3562]: E1213 00:07:11.876873 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.213026 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/6.log" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.213393 3562 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="958f2eff55abf2b420e4008cfd5b0306001e5f2c7ce3d7533e59c3fad562aeae" exitCode=1 Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.213445 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"958f2eff55abf2b420e4008cfd5b0306001e5f2c7ce3d7533e59c3fad562aeae"} Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.214374 3562 scope.go:117] "RemoveContainer" containerID="958f2eff55abf2b420e4008cfd5b0306001e5f2c7ce3d7533e59c3fad562aeae" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.474308 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:12 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:12 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:12 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.474428 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.874776 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.874776 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875085 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875156 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875180 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875283 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875341 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875154 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.875376 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875405 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875434 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875448 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875448 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875469 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875479 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875507 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875409 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875552 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875574 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.875163 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875611 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875557 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875289 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875640 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875217 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875665 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875233 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875678 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875687 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.875694 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875643 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875242 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875258 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875266 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.876001 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875257 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.876101 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875534 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.876176 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875204 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.876242 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875615 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:12 crc kubenswrapper[3562]: I1213 00:07:12.875658 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.876449 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.876720 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.876815 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.876938 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.877059 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.877266 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.877400 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.877478 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.877535 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.877645 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.877762 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.877833 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.877847 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.878036 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.878166 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.878291 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.878403 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.878506 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.878621 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.878685 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.878772 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.878881 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.879014 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.879023 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.879130 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.879257 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.879364 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.879409 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:12 crc kubenswrapper[3562]: E1213 00:07:12.879496 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.218274 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/6.log" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.218401 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"cbff4d1914227c5f07b10f5345c567907d0e1f2bf0a32861c6c049395796485b"} Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.472095 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:13 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:13 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:13 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.472231 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.874109 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.874149 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.874195 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.874783 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.875864 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.875971 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.876116 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.876323 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.876393 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.876523 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.876584 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.876656 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.876670 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.876723 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.876782 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.876850 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.876993 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.876986 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:13 crc kubenswrapper[3562]: I1213 00:07:13.877016 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.877084 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.877300 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.877319 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:13 crc kubenswrapper[3562]: E1213 00:07:13.877377 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.470947 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:14 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:14 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:14 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.471037 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.874851 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875181 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.874872 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.874879 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.874884 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.874913 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.874935 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.874942 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.874949 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.874954 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.874954 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875011 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875015 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875019 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875016 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875027 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875029 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875029 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875044 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875047 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875046 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875061 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875065 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875063 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875072 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875079 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875078 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875087 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875088 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875092 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875098 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875104 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875110 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875109 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875121 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:14 crc kubenswrapper[3562]: I1213 00:07:14.875119 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.876354 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.876417 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.876482 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.876536 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.876698 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.876945 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877031 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877139 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877209 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877287 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877367 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877464 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877627 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877681 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877740 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877843 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877923 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.877993 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878089 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878176 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878251 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878334 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878411 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878446 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878477 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878535 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878626 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878694 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878750 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878788 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878860 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.878960 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.879001 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.879057 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.879105 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:14 crc kubenswrapper[3562]: E1213 00:07:14.879172 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.471413 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:15 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:15 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:15 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.471504 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.874476 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.874506 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.874477 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.874566 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.874507 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:15 crc kubenswrapper[3562]: E1213 00:07:15.874664 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.874780 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.874819 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.874922 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.874934 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.874833 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:15 crc kubenswrapper[3562]: I1213 00:07:15.874808 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:15 crc kubenswrapper[3562]: E1213 00:07:15.875173 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:15 crc kubenswrapper[3562]: E1213 00:07:15.875288 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:15 crc kubenswrapper[3562]: E1213 00:07:15.875408 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:15 crc kubenswrapper[3562]: E1213 00:07:15.875481 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:15 crc kubenswrapper[3562]: E1213 00:07:15.875552 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:15 crc kubenswrapper[3562]: E1213 00:07:15.875770 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:15 crc kubenswrapper[3562]: E1213 00:07:15.875969 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:15 crc kubenswrapper[3562]: E1213 00:07:15.876054 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:15 crc kubenswrapper[3562]: E1213 00:07:15.876154 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:15 crc kubenswrapper[3562]: E1213 00:07:15.876225 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.470385 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:16 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:16 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:16 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.470716 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874473 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874528 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874572 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874623 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874572 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874627 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874572 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874699 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874720 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.874724 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874738 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874760 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874796 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874799 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874937 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874952 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874954 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875003 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875008 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875013 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875008 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874944 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874723 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875065 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874990 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874966 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875108 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874738 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875138 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875134 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875167 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.874831 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875049 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875242 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875254 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.875163 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.875395 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.875501 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.875657 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.875820 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.875867 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876078 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876137 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876196 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876269 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876332 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876460 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:16 crc kubenswrapper[3562]: I1213 00:07:16.876482 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876556 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876642 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876749 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876829 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876863 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876882 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.876975 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877085 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877171 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877241 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877296 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877414 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877455 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877500 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877511 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877579 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877665 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877721 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877793 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877834 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877873 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.877933 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.878023 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:16 crc kubenswrapper[3562]: E1213 00:07:16.878085 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.470194 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:17 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:17 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:17 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.470262 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.874438 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.874584 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.874666 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:17 crc kubenswrapper[3562]: E1213 00:07:17.874717 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.874739 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.874775 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.874681 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.874809 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.874827 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:17 crc kubenswrapper[3562]: E1213 00:07:17.875092 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.875177 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:17 crc kubenswrapper[3562]: E1213 00:07:17.875369 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.875530 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:17 crc kubenswrapper[3562]: I1213 00:07:17.875531 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:17 crc kubenswrapper[3562]: E1213 00:07:17.875547 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:17 crc kubenswrapper[3562]: E1213 00:07:17.875716 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:17 crc kubenswrapper[3562]: E1213 00:07:17.875778 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:17 crc kubenswrapper[3562]: E1213 00:07:17.875991 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:17 crc kubenswrapper[3562]: E1213 00:07:17.876122 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:17 crc kubenswrapper[3562]: E1213 00:07:17.876315 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:17 crc kubenswrapper[3562]: E1213 00:07:17.876415 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:17 crc kubenswrapper[3562]: E1213 00:07:17.876576 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.471075 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:18 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:18 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:18 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.471162 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875464 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875510 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875569 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875622 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875658 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875719 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875756 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875772 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875802 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875865 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875566 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875999 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876012 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.876030 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876108 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.875718 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.876180 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.876375 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876429 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876476 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876521 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876529 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876646 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876652 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.876687 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876661 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876788 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876583 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876836 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876599 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876793 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876834 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876568 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.877039 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.877103 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.877035 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.876596 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.877330 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.877404 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.877445 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.877517 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:18 crc kubenswrapper[3562]: I1213 00:07:18.877523 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.877799 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.878048 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.878205 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.878412 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.878559 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.878784 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.879048 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.879122 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.879210 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.879359 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.879580 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.879593 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.879716 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.879839 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.879981 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.880113 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.880320 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.880465 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.880660 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.880840 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.881061 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.881202 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.881358 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.881486 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.881658 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.881792 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.881986 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.882011 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.882058 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.882095 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:18 crc kubenswrapper[3562]: E1213 00:07:18.882203 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.471613 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:19 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:19 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:19 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.471787 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.874663 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:19 crc kubenswrapper[3562]: E1213 00:07:19.874875 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.875124 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.875199 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.875305 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.875304 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.875296 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:19 crc kubenswrapper[3562]: E1213 00:07:19.875397 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.875454 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:19 crc kubenswrapper[3562]: E1213 00:07:19.875505 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.875560 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.875609 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.875647 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:19 crc kubenswrapper[3562]: E1213 00:07:19.875730 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:19 crc kubenswrapper[3562]: I1213 00:07:19.875794 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:19 crc kubenswrapper[3562]: E1213 00:07:19.875997 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:19 crc kubenswrapper[3562]: E1213 00:07:19.875995 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:19 crc kubenswrapper[3562]: E1213 00:07:19.876090 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:19 crc kubenswrapper[3562]: E1213 00:07:19.876198 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:19 crc kubenswrapper[3562]: E1213 00:07:19.876265 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:19 crc kubenswrapper[3562]: E1213 00:07:19.876365 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:19 crc kubenswrapper[3562]: E1213 00:07:19.876581 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.226197 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output="" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.472317 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:20 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:20 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:20 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.472406 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875444 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875524 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875587 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875628 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875638 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875721 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875747 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875721 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875755 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875786 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875802 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875444 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875896 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.875971 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.876124 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.876126 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.876259 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.876278 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.876291 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.876330 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.876365 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.876373 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.876213 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.876219 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.876578 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.876592 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.876749 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.876973 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.877024 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.877055 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.877199 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.877280 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.877285 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.877326 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.877353 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.877374 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.877451 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.877549 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.877826 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.878096 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.878141 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.878252 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.878269 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.878453 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.878553 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.878614 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.878791 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.878982 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.879118 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.879356 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.879744 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.879879 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:20 crc kubenswrapper[3562]: I1213 00:07:20.880017 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.880056 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.880171 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.880353 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.880403 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.880550 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.880706 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.880965 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.881193 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.881252 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.881394 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.881535 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.881707 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.881866 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.882009 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.882099 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.882339 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.882457 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.882506 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:20 crc kubenswrapper[3562]: E1213 00:07:20.882628 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.471109 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:21 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:21 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:21 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.471226 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.874995 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.875043 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.875070 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.875095 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.875043 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.875052 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.875139 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.875193 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.875201 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:21 crc kubenswrapper[3562]: E1213 00:07:21.875306 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.875340 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:21 crc kubenswrapper[3562]: E1213 00:07:21.875420 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:21 crc kubenswrapper[3562]: E1213 00:07:21.875490 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:21 crc kubenswrapper[3562]: E1213 00:07:21.875587 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:21 crc kubenswrapper[3562]: E1213 00:07:21.875637 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:21 crc kubenswrapper[3562]: E1213 00:07:21.875717 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:21 crc kubenswrapper[3562]: E1213 00:07:21.875789 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:21 crc kubenswrapper[3562]: I1213 00:07:21.875815 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:21 crc kubenswrapper[3562]: E1213 00:07:21.875877 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:21 crc kubenswrapper[3562]: E1213 00:07:21.875947 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:21 crc kubenswrapper[3562]: E1213 00:07:21.875989 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:21 crc kubenswrapper[3562]: E1213 00:07:21.876035 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.471224 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:22 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:22 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:22 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.471305 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.874919 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.874973 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.874979 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875037 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875050 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875100 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875124 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.874935 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.875187 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875196 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875067 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875295 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.875307 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875107 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875331 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875227 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875371 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875407 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875334 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875431 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875407 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875457 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875471 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875503 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875509 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875514 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875479 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875520 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875563 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875579 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875581 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875736 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875608 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875660 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.875693 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875710 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.875862 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875883 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875889 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:22 crc kubenswrapper[3562]: I1213 00:07:22.875953 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.875999 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.876076 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.876246 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.876473 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.876619 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.876740 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.876875 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.877087 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.877171 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.877227 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.877313 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.877426 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.877524 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.877613 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.877718 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.877806 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.877946 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878037 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878087 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878180 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878269 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878338 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878415 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878500 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878562 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878603 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878657 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878734 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878837 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878841 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878861 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:22 crc kubenswrapper[3562]: E1213 00:07:22.878890 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.471479 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:23 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:23 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:23 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.471589 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.874503 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.874545 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.874517 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.874601 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.874604 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.874649 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.874659 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.874577 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.874690 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.874744 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.876648 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:23 crc kubenswrapper[3562]: I1213 00:07:23.876703 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.876930 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.877033 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.877056 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.877130 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.877160 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.877222 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.877482 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.877578 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.877631 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.877686 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:23 crc kubenswrapper[3562]: E1213 00:07:23.877740 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.471587 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:24 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:24 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:24 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.471664 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874479 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874845 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874850 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874948 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874976 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874556 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.875063 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874592 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874586 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874606 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874627 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874639 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874643 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874666 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.875245 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874677 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874688 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874698 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874712 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874725 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.875381 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.875440 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874730 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874727 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874740 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874742 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874747 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874762 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874771 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874771 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874779 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874777 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874797 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874808 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874810 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874810 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874817 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874820 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874876 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:24 crc kubenswrapper[3562]: I1213 00:07:24.874490 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.875793 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.875941 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.876054 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.876212 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.876268 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.876388 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.876487 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.876556 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.876697 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.876741 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.876831 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.876885 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.876978 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.877110 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.877235 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.877287 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.877439 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.877521 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.877605 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.877745 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.877784 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.877853 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.877931 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.878023 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.878046 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.878051 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.878124 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.878196 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.878260 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.878351 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.878389 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:24 crc kubenswrapper[3562]: E1213 00:07:24.878504 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.470755 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:25 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:25 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:25 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.470833 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.874721 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.874721 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.874779 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.874803 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.874872 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.874933 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.874948 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.874828 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.874788 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.874831 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:25 crc kubenswrapper[3562]: E1213 00:07:25.875054 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:25 crc kubenswrapper[3562]: E1213 00:07:25.875217 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:25 crc kubenswrapper[3562]: E1213 00:07:25.875348 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:25 crc kubenswrapper[3562]: E1213 00:07:25.875451 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:25 crc kubenswrapper[3562]: E1213 00:07:25.875537 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:25 crc kubenswrapper[3562]: E1213 00:07:25.875628 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:25 crc kubenswrapper[3562]: I1213 00:07:25.875671 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:25 crc kubenswrapper[3562]: E1213 00:07:25.875744 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:25 crc kubenswrapper[3562]: E1213 00:07:25.875793 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:25 crc kubenswrapper[3562]: E1213 00:07:25.875865 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:25 crc kubenswrapper[3562]: E1213 00:07:25.875934 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:25 crc kubenswrapper[3562]: E1213 00:07:25.875996 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.471052 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:26 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:26 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:26 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.471198 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.721291 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.721444 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.721491 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.721495 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.721626 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.721638 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.721533 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.721707 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object 
"openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.721645 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.721615284 +0000 UTC m=+147.204845817 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.721813 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.721767588 +0000 UTC m=+147.204998351 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.721850 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.72183089 +0000 UTC m=+147.205061673 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.721954 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.722031 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.722101 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722142 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.722116118 +0000 UTC m=+147.205346821 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.722191 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722243 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722247 3562 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722303 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.722252 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722344 3562 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722308 3562 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722320 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.722296303 +0000 UTC m=+147.205527016 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722432 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.722406916 +0000 UTC m=+147.205637479 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722467 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.722448517 +0000 UTC m=+147.205679070 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.722568 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722585 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722700 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.722689925 +0000 UTC m=+147.205920448 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722721 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.722713105 +0000 UTC m=+147.205943628 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.722695 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.722766 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.722799 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722812 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722856 3562 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722928 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722958 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.72287541 +0000 UTC m=+147.206105983 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722977 3562 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.722823 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.722998 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.722977693 +0000 UTC m=+147.206208256 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723030 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.723013364 +0000 UTC m=+147.206243927 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723061 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.723043795 +0000 UTC m=+147.206274548 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723090 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:08:30.723073845 +0000 UTC m=+147.206304408 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.723216 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723366 3562 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.723380 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723433 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.723408055 +0000 UTC m=+147.206638588 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723452 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723516 3562 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.723521 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723540 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.723532968 +0000 UTC m=+147.206763491 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.723574 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.723624 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723625 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.723618061 +0000 UTC m=+147.206848584 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.723664 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723722 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.723764 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723799 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.723777245 +0000 UTC m=+147.207007968 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723829 3562 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723837 3562 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723732 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723942 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.723898518 +0000 UTC m=+147.207129051 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723961 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.72395128 +0000 UTC m=+147.207181803 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.723985 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.723979671 +0000 UTC m=+147.207210194 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.825216 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.825285 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.825311 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.825335 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825395 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825461 3562 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825411 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825522 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.825486467 +0000 UTC m=+147.308717030 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825567 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.825546839 +0000 UTC m=+147.308777402 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825601 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.825581799 +0000 UTC m=+147.308812402 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.825687 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.825718 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.825740 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.825773 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.825796 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod 
\"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.825824 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825849 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825851 3562 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825926 3562 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825928 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.825884598 +0000 UTC m=+147.309115181 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825965 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.82594805 +0000 UTC m=+147.309178573 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825855 3562 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825980 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.82597381 +0000 UTC m=+147.309204333 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.826007 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.825870 3562 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826051 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826055 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.826028092 +0000 UTC m=+147.309258685 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826131 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.826196 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826219 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.826195747 +0000 UTC m=+147.309426300 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.826277 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826327 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.826361 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826394 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.826373752 +0000 UTC m=+147.309604315 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826440 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.826455 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826477 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.826466935 +0000 UTC m=+147.309697478 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826500 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826519 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826576 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826601 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.826528 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826615 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826526 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.826515016 +0000 UTC m=+147.309745539 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826645 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.82663879 +0000 UTC m=+147.309869313 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826659 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.82665238 +0000 UTC m=+147.309882903 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.826678 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.826711 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.826731 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.826752 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826753 3562 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.826793 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826809 3562 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object 
"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826820 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.826799204 +0000 UTC m=+147.310029767 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826832 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826860 3562 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826890 3562 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.826929 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.826892747 +0000 UTC m=+147.310123270 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.827026 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.827073 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827109 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.827080392 +0000 UTC m=+147.310310955 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827135 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827158 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.827136544 +0000 UTC m=+147.310367107 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827159 3562 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827193 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.827175465 +0000 UTC m=+147.310406018 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827209 3562 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827245 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.827225726 +0000 UTC m=+147.310456289 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827277 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:08:30.827260757 +0000 UTC m=+147.310491320 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.827380 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.827456 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827469 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827474 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.827448582 +0000 UTC m=+147.310679105 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827562 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827569 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.827546415 +0000 UTC m=+147.310776988 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.827650 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827679 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.827737 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827748 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.82772672 +0000 UTC m=+147.310957283 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827784 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.827766051 +0000 UTC m=+147.310996664 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827815 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.827798792 +0000 UTC m=+147.311029415 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827851 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.827874 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.827970 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828000 3562 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828010 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828014 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.827999258 +0000 UTC m=+147.311229771 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828071 3562 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828096 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.82806802 +0000 UTC m=+147.311298603 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828135 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.828114031 +0000 UTC m=+147.311344654 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828238 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828260 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.828246844 +0000 UTC m=+147.311477367 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828314 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828361 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.828348557 +0000 UTC m=+147.311579110 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828362 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828406 3562 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828417 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828435 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.82842833 +0000 UTC m=+147.311658853 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828464 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828495 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828493 3562 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828530 3562 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828547 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.828528912 +0000 UTC m=+147.311759475 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828535 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828569 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.828558003 +0000 UTC m=+147.311788566 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828590 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828603 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828611 3562 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828612 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828610 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828636 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.828629105 +0000 UTC m=+147.311859628 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828650 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.828643636 +0000 UTC m=+147.311874289 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828665 3562 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828702 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828708 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.828696397 +0000 UTC m=+147.311926960 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828671 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828724 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.828718208 +0000 UTC m=+147.311948731 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828783 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828829 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828873 3562 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828915 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.828897183 +0000 UTC m=+147.312127706 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828876 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828930 3562 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828966 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.828991 3562 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829006 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:08:30.828999265 +0000 UTC m=+147.312229788 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829028 3562 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.828996 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829040 3562 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829040 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.829026256 +0000 UTC m=+147.312256819 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.829085 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.829113 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829140 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.829124649 +0000 UTC m=+147.312355242 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829151 3562 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829168 3562 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829167 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.829158221 +0000 UTC m=+147.312388814 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.829218 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829237 3562 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829237 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.829223753 +0000 UTC m=+147.312454316 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829264 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.829256284 +0000 UTC m=+147.312486807 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829278 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.829271674 +0000 UTC m=+147.312502197 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.829318 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.829345 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.829376 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.829397 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829432 3562 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.829453 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829469 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config 
podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.829458629 +0000 UTC m=+147.312689242 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829518 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829531 3562 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829541 3562 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829542 3562 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829583 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.829574262 +0000 UTC m=+147.312804875 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829546 3562 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829609 3562 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829630 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.829602263 +0000 UTC m=+147.312832816 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829667 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.829648774 +0000 UTC m=+147.312879407 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.829777 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.829767508 +0000 UTC m=+147.312998031 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.830016 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.830045 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.830066 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830123 3562 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830121 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830147 3562 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.830141348 +0000 UTC m=+147.313371861 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830163 3562 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830191 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.830172559 +0000 UTC m=+147.313403112 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830218 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.8302045 +0000 UTC m=+147.313435053 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.830459 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.830508 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830550 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.830557 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830577 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.83057122 +0000 UTC m=+147.313801743 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830577 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.830609 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830611 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.830602791 +0000 UTC m=+147.313833314 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830631 3562 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830659 3562 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830660 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.830636 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830696 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.830677563 +0000 UTC m=+147.313908176 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830725 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.830711144 +0000 UTC m=+147.313941697 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830755 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.830739455 +0000 UTC m=+147.313970118 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.830822 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.830889 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.830931 3562 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.831009 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831034 3562 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831081 3562 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.831068694 +0000 UTC m=+147.314299217 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.831084 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831115 3562 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831111 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.831089674 +0000 UTC m=+147.314320237 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.831212 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831213 3562 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831234 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.831224868 +0000 UTC m=+147.314455391 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831215 3562 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831292 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.83127219 +0000 UTC m=+147.314502783 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.831355 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.831409 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.831435 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831449 3562 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831470 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.831444404 +0000 UTC m=+147.314675037 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831481 3562 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831522 3562 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831533 3562 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.831541 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831554 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.831543507 +0000 UTC m=+147.314774100 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831572 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.831562508 +0000 UTC m=+147.314793141 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831591 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.831579258 +0000 UTC m=+147.314809781 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831634 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.831625919 +0000 UTC m=+147.314856442 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.831639 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.831684 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.831752 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831776 3562 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831807 3562 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831834 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.831827115 +0000 UTC m=+147.315057638 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831834 3562 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.831786 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831847 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.831840005 +0000 UTC m=+147.315070528 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.831942 3562 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832013 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.83198153 +0000 UTC m=+147.315212093 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.832080 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832096 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832108 3562 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832117 3562 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.832142 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832148 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.832134414 +0000 UTC m=+147.315365027 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832178 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.832162635 +0000 UTC m=+147.315393168 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.832206 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832215 3562 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.832243 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832251 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.832241667 +0000 UTC m=+147.315472190 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.832276 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832304 3562 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.832309 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832332 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.83232542 +0000 UTC m=+147.315555933 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832368 3562 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832389 3562 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832409 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.832398802 +0000 UTC m=+147.315629405 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.832430 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.832456 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832464 3562 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.832493 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832496 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.832486644 +0000 UTC m=+147.315717167 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832519 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.832512675 +0000 UTC m=+147.315743198 (durationBeforeRetry 1m4s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832534 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. 
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832534 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.832528175 +0000 UTC m=+147.315758698 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832534 3562 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.832553 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832561 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.832555156 +0000 UTC m=+147.315785679 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832602 3562 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832623 3562 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832653 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.832647859 +0000 UTC m=+147.315878382 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.832687 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.832666319 +0000 UTC m=+147.315896872 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.874601 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.874664 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.874662 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.874745 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.874733 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.874804 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.874856 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.874814 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875074 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875106 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.875117 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875245 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.875316 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875338 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875347 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875447 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875493 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875544 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875586 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875586 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.875660 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.875785 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875786 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875792 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875852 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.875826 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.876084 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.876154 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.876108 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.876283 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.876355 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.876412 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.876559 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.876603 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.876638 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.876653 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.876751 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.876813 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.876873 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.876939 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.877016 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.877074 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.877089 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.877119 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.877147 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.877120 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.877180 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.877315 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.877363 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.877391 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.877514 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.877683 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.877942 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.878113 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.878176 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.878258 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.878326 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.878449 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.878572 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.878655 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.878769 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.878883 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.879071 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.879196 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.879298 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.879520 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.879669 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.879745 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.879876 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.879972 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.880065 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.933847 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.933940 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.934097 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934141 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.934199 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934210 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934230 3562 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.934258 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.934282 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934314 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934359 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934330 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.934299119 +0000 UTC m=+147.417529662 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934362 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934395 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934405 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934424 3562 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934434 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934449 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. 
No retries permitted until 2025-12-13 00:08:30.934428603 +0000 UTC m=+147.417659126 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934373 3562 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934536 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934537 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934571 3562 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934595 3562 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934612 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934658 3562 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934736 3562 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934749 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.934431 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934687 3562 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934835 3562 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934465 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.934456333 +0000 UTC m=+147.417686856 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934951 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.934892995 +0000 UTC m=+147.418123518 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.934973 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.934965127 +0000 UTC m=+147.418195650 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.935034 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.935088 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.93507576 +0000 UTC m=+147.418306283 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.935126 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.935139 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.935136 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.935100411 +0000 UTC m=+147.418331024 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.935147 3562 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.935253 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. 
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.935253 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.935231025 +0000 UTC m=+147.418461658 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.935945 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.936034 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936040 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936055 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936063 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936089 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.936081809 +0000 UTC m=+147.419312332 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.936115 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936153 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936176 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936189 3562 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936259 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.936247344 +0000 UTC m=+147.419477997 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.936313 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.936389 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.936508 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936527 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936572 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936540 3562 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936616 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936633 3562 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936725 3562 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.936584 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936642 3562 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936857 3562 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936886 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.936878432 +0000 UTC m=+147.420108955 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936534 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936919 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936926 3562 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936946 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.936939853 +0000 UTC m=+147.420170376 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936605 3562 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936972 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.936966584 +0000 UTC m=+147.420197107 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936655 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936988 3562 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.936995 3562 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937008 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.936991945 +0000 UTC m=+147.420222538 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937026 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.937020826 +0000 UTC m=+147.420251349 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.937024 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.937076 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937143 3562 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.937162 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937174 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.937217 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937219 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937236 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937244 3562 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937245 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.937223061 +0000 UTC m=+147.420453654 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937268 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937279 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937288 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937311 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.937304403 +0000 UTC m=+147.420534926 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.937314 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937321 3562 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937352 3562 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937356 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937363 3562 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937368 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937376 3562 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937404 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.937393256 +0000 UTC m=+147.420623789 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.937420 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937425 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.937415677 +0000 UTC m=+147.420646210 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937493 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937487 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.937467388 +0000 UTC m=+147.420697951 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937510 3562 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937520 3562 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.937814 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. 
No retries permitted until 2025-12-13 00:08:30.937793977 +0000 UTC m=+147.421024500 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: I1213 00:07:26.937859 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.938004 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.938016 3562 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.938024 3562 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:26 crc kubenswrapper[3562]: E1213 00:07:26.938052 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-12-13 00:08:30.938044794 +0000 UTC m=+147.421275307 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.039383 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.039472 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.039612 3562 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.039678 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.039724 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.040040 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.040000534 +0000 UTC m=+147.523231067 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.040035 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.040092 3562 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.040107 3562 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.040165 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.040155538 +0000 UTC m=+147.523386061 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.040212 3562 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.040608 3562 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.040633 3562 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.041125 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.041102244 +0000 UTC m=+147.524332807 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.142948 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.143052 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.143469 3562 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.143560 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.143613 3562 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.143626 3562 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.143696 3562 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.143716 3562 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.143702 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.143676311 +0000 UTC m=+147.626906864 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.143790 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.143770943 +0000 UTC m=+147.627001466 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.245588 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.245668 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.245718 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.245765 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.245944 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.245985 3562 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246000 3562 projected.go:200] Error preparing data for projected volume 
kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.245944 3562 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246071 3562 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246098 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246146 3562 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246161 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246071 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.246047571 +0000 UTC m=+147.729278154 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246243 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.246222706 +0000 UTC m=+147.729453219 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246255 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.246249467 +0000 UTC m=+147.729479990 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246707 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246725 3562 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246735 3562 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.246772 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.246761812 +0000 UTC m=+147.729992335 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.348526 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.348608 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.348684 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.348715 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: 
\"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348718 3562 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348778 3562 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348792 3562 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348795 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348821 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348829 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348831 3562 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348846 3562 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348852 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.348832084 +0000 UTC m=+147.832062607 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348859 3562 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348886 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.348868905 +0000 UTC m=+147.832099428 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348891 3562 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348955 3562 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348967 3562 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.348919 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.348895225 +0000 UTC m=+147.832125748 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.349132 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.349162 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.349132882 +0000 UTC m=+147.832363445 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.349205 3562 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.349221 3562 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.349231 3562 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.349275 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-12-13 00:08:31.349264246 +0000 UTC m=+147.832494769 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.470969 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:27 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:27 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:27 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.471058 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.874758 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.874814 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.874868 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.874954 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.874957 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.874769 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.875034 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.874974 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.875084 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.875103 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.875179 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:27 crc kubenswrapper[3562]: I1213 00:07:27.875221 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.875224 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.875504 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.875637 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.875750 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.875840 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.875962 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.876042 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.876122 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.876220 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:27 crc kubenswrapper[3562]: E1213 00:07:27.876279 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.471028 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:28 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:28 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:28 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.471106 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.874211 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.874408 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.874619 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.874716 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.874849 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.874958 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875076 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.875154 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875166 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875277 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875337 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.875418 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875445 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875531 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875568 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875605 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875625 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875688 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.875699 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875709 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875740 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875781 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875798 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875813 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875858 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.875866 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875786 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875873 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875848 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.875965 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.876013 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.876119 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.876130 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.876151 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.876186 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.876208 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.876249 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.876293 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.876296 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.876362 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.876446 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.876505 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.876531 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.876596 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.876646 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.876696 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.876759 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.876797 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.876845 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.876992 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.877015 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877058 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.877080 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877112 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877226 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877309 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877384 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877441 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:28 crc kubenswrapper[3562]: I1213 00:07:28.877460 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877511 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877616 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877720 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877763 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877773 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877783 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877926 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.877964 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.878048 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.878109 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.878212 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.878263 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.878312 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 13 00:07:28 crc kubenswrapper[3562]: E1213 00:07:28.878437 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.470873 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 00:07:29 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld
Dec 13 00:07:29 crc kubenswrapper[3562]: [+]process-running ok
Dec 13 00:07:29 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.470951 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.875318 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.875363 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.875398 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.875269 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.875337 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.875575 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.875582 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.875680 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:07:29 crc kubenswrapper[3562]: E1213 00:07:29.875774 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.875800 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 13 00:07:29 crc kubenswrapper[3562]: E1213 00:07:29.875892 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 13 00:07:29 crc kubenswrapper[3562]: E1213 00:07:29.875988 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 13 00:07:29 crc kubenswrapper[3562]: E1213 00:07:29.876046 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.876056 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 13 00:07:29 crc kubenswrapper[3562]: E1213 00:07:29.876103 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 13 00:07:29 crc kubenswrapper[3562]: E1213 00:07:29.876154 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 13 00:07:29 crc kubenswrapper[3562]: E1213 00:07:29.876213 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 13 00:07:29 crc kubenswrapper[3562]: E1213 00:07:29.876273 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 13 00:07:29 crc kubenswrapper[3562]: E1213 00:07:29.876328 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 13 00:07:29 crc kubenswrapper[3562]: I1213 00:07:29.876404 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 13 00:07:29 crc kubenswrapper[3562]: E1213 00:07:29.876480 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 13 00:07:29 crc kubenswrapper[3562]: E1213 00:07:29.876577 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.471870 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 00:07:30 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld
Dec 13 00:07:30 crc kubenswrapper[3562]: [+]process-running ok
Dec 13 00:07:30 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.471978 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.874820 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.874872 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.874839 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.874933 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.874966 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.874973 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.874996 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.874974 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.874942 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.874886 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.875146 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.875222 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.875230 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.875247 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.875262 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.875247 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.874870 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.875389 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.875424 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.875448 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.875489 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.875581 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.875611 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.875660 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.875749 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.875751 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.875821 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.875899 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.875951 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.876040 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876068 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.876163 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876169 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876190 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876242 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876243 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.876302 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876394 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876438 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876491 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876492 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876516 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876554 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876562 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.876469 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876633 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.876642 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.876679 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.876874 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.877178 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.877194 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 13 00:07:30 crc kubenswrapper[3562]: I1213 00:07:30.877238 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.877267 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.877420 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.877540 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.877681 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.877808 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.877935 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878023 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878112 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878248 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878347 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878392 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878426 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878485 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878555 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878623 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878692 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878809 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878850 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.878967 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 13 00:07:30 crc kubenswrapper[3562]: E1213 00:07:30.879005 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.471363 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 00:07:31 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld
Dec 13 00:07:31 crc kubenswrapper[3562]: [+]process-running ok
Dec 13 00:07:31 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.471491 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.875050 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.875098 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.875058 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.875146 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.875169 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.875192 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.875204 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.875119 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.875147 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.875257 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Dec 13 00:07:31 crc kubenswrapper[3562]: I1213 00:07:31.875289 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Dec 13 00:07:31 crc kubenswrapper[3562]: E1213 00:07:31.875349 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 13 00:07:31 crc kubenswrapper[3562]: E1213 00:07:31.875452 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Dec 13 00:07:31 crc kubenswrapper[3562]: E1213 00:07:31.875507 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 13 00:07:31 crc kubenswrapper[3562]: E1213 00:07:31.875606 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 13 00:07:31 crc kubenswrapper[3562]: E1213 00:07:31.875679 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 13 00:07:31 crc kubenswrapper[3562]: E1213 00:07:31.875775 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 13 00:07:31 crc kubenswrapper[3562]: E1213 00:07:31.876068 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 13 00:07:31 crc kubenswrapper[3562]: E1213 00:07:31.876111 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 13 00:07:31 crc kubenswrapper[3562]: E1213 00:07:31.876311 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 13 00:07:31 crc kubenswrapper[3562]: E1213 00:07:31.876406 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 13 00:07:31 crc kubenswrapper[3562]: E1213 00:07:31.876507 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.470753 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 00:07:32 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld
Dec 13 00:07:32 crc kubenswrapper[3562]: [+]process-running ok
Dec 13 00:07:32 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.471126 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874595 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874635 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874667 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874671 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874708 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874682 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874749 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874752 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874601 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874806 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874719 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874823 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874634 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874727 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874733 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.874883 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874756 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874891 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875584 3562 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874885 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.875648 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874848 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.874832 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.875711 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874936 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.874950 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.875025 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875062 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.875794 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875081 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875093 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.875863 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875107 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.875200 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875216 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.875946 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875252 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875257 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875281 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875287 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.876100 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875312 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.875382 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875423 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.875486 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.875597 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.876185 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.876241 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.876396 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.876511 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.876552 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.876608 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:32 crc kubenswrapper[3562]: I1213 00:07:32.876681 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.876735 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.876801 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.876878 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.876965 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877044 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877163 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877184 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877220 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877277 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877376 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877435 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877501 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877565 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877636 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877687 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877748 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877799 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877849 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:32 crc kubenswrapper[3562]: E1213 00:07:32.877895 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.471755 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:33 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:33 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:33 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.471814 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.874407 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.874506 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.874519 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.874493 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.874561 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.874572 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.874749 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.874826 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.874872 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.877309 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.877549 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.877553 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.877614 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.877703 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:33 crc kubenswrapper[3562]: I1213 00:07:33.877746 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.877993 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.878051 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.878155 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.878378 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.878460 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.878468 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.878752 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:33 crc kubenswrapper[3562]: E1213 00:07:33.878834 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.471221 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:34 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:34 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:34 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.471301 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874465 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874482 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874595 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874613 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874666 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874712 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.874724 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874762 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874796 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874528 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874898 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874761 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874959 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874997 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874985 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875048 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875072 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.875132 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874820 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875167 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874830 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875242 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874812 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875259 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874867 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875283 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.874758 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875313 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.875019 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875314 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875341 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.875385 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875396 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875427 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875213 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875452 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875345 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.875488 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.875596 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.875695 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.875885 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.876004 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.876149 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.876148 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.876223 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.876433 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.876561 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.876662 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.876816 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.876891 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:34 crc kubenswrapper[3562]: I1213 00:07:34.877000 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.877031 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.877184 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.877295 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.877385 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.877517 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.877637 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.877750 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.877947 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.878017 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.878119 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.878244 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.878315 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.878448 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.878567 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.878663 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.878678 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.878806 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.878879 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.879019 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.879132 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:34 crc kubenswrapper[3562]: E1213 00:07:34.879205 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.471056 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:35 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:35 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:35 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.471140 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.874135 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.874210 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.874271 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.874320 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.874360 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.874386 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.874339 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.874424 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.874440 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.874531 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:35 crc kubenswrapper[3562]: E1213 00:07:35.874746 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:35 crc kubenswrapper[3562]: E1213 00:07:35.874985 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:35 crc kubenswrapper[3562]: E1213 00:07:35.875187 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:35 crc kubenswrapper[3562]: E1213 00:07:35.875423 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:35 crc kubenswrapper[3562]: E1213 00:07:35.875554 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:35 crc kubenswrapper[3562]: I1213 00:07:35.875562 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:35 crc kubenswrapper[3562]: E1213 00:07:35.875653 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:35 crc kubenswrapper[3562]: E1213 00:07:35.875745 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:35 crc kubenswrapper[3562]: E1213 00:07:35.875940 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:35 crc kubenswrapper[3562]: E1213 00:07:35.876081 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:35 crc kubenswrapper[3562]: E1213 00:07:35.876104 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:35 crc kubenswrapper[3562]: E1213 00:07:35.876174 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.471177 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:36 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:36 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:36 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.471287 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874508 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874549 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874613 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874615 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874658 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874672 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874624 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874709 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874713 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874626 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874794 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874809 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.874818 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.874874 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.874975 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874992 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875017 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875021 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.874529 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.875118 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875128 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.875173 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875210 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.875239 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875176 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.875286 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875312 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875335 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.875362 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875407 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875432 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.875466 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875481 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875517 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.875534 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875570 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.875605 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875642 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875673 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875731 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.875702 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.875818 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.875838 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.875964 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.876052 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.876080 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.876130 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.876192 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.876213 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.876269 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.876347 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.876390 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.876427 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.876476 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:36 crc kubenswrapper[3562]: I1213 00:07:36.876510 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.876585 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.876654 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.876691 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.876814 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877040 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877074 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877137 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877208 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877279 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877424 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877461 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877532 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877606 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877637 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877678 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877735 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:36 crc kubenswrapper[3562]: E1213 00:07:36.877777 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.471006 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:37 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:37 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:37 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.471104 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.875270 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.875326 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.875365 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.875382 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.875405 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.875341 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.875307 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.875349 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.875388 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.875299 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:37 crc kubenswrapper[3562]: I1213 00:07:37.875541 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:37 crc kubenswrapper[3562]: E1213 00:07:37.875697 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:37 crc kubenswrapper[3562]: E1213 00:07:37.875877 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:37 crc kubenswrapper[3562]: E1213 00:07:37.875958 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:37 crc kubenswrapper[3562]: E1213 00:07:37.876048 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:37 crc kubenswrapper[3562]: E1213 00:07:37.876079 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:37 crc kubenswrapper[3562]: E1213 00:07:37.876151 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:37 crc kubenswrapper[3562]: E1213 00:07:37.876264 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:37 crc kubenswrapper[3562]: E1213 00:07:37.876334 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:37 crc kubenswrapper[3562]: E1213 00:07:37.876400 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:37 crc kubenswrapper[3562]: E1213 00:07:37.876466 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:37 crc kubenswrapper[3562]: E1213 00:07:37.876566 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.470844 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:38 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:38 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:38 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.470978 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.874710 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.874772 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.874813 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.874896 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.874953 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.874981 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875028 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875028 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875045 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875057 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875064 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875088 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.874896 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875111 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875132 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875136 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.874981 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875013 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875163 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875202 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875212 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875229 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875268 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.875278 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.875280 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.874987 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875322 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.874841 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875338 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875172 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875394 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875190 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.875453 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875484 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875498 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875526 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875539 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.874741 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:38 crc kubenswrapper[3562]: I1213 00:07:38.875560 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.875785 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.875930 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.876645 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.876817 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.877354 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.877518 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.877553 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.877939 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.878403 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.878617 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.878743 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.878931 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.879040 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.879362 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.879407 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.879553 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.879746 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.879833 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.879951 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.880091 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.880198 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.880337 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.880456 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.880565 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.880669 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.880773 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.880861 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.880953 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.881046 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.881217 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.881346 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.883488 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.883701 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:38 crc kubenswrapper[3562]: E1213 00:07:38.883722 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.470497 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:39 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:39 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:39 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.470573 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.874401 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.874448 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.874493 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.874502 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.874522 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.874448 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.874540 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.874571 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.874612 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.874480 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:39 crc kubenswrapper[3562]: E1213 00:07:39.874737 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:39 crc kubenswrapper[3562]: E1213 00:07:39.874837 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:39 crc kubenswrapper[3562]: E1213 00:07:39.874941 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:39 crc kubenswrapper[3562]: E1213 00:07:39.875005 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:39 crc kubenswrapper[3562]: I1213 00:07:39.875037 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:39 crc kubenswrapper[3562]: E1213 00:07:39.875107 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:39 crc kubenswrapper[3562]: E1213 00:07:39.875172 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:39 crc kubenswrapper[3562]: E1213 00:07:39.875228 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:39 crc kubenswrapper[3562]: E1213 00:07:39.875286 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:39 crc kubenswrapper[3562]: E1213 00:07:39.875328 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:39 crc kubenswrapper[3562]: E1213 00:07:39.875373 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:39 crc kubenswrapper[3562]: E1213 00:07:39.875422 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.471618 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:40 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:40 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:40 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.471697 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.874493 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.874516 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.874675 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.875421 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.875614 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.875736 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.875855 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.875930 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.875677 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.876029 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.876108 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.875689 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876110 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876142 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876193 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876218 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.876232 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876242 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.875706 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876276 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.875726 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.875715 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876305 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876181 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876338 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876365 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876388 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876429 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.876444 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876470 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876460 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876505 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.876526 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876539 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876561 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876573 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876607 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876633 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876494 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876660 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876679 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.876702 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876734 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876753 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:40 crc kubenswrapper[3562]: I1213 00:07:40.876777 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.876782 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.876941 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.877029 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.877137 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.877194 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.877472 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.877527 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.877614 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.877681 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.877754 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.877821 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.877884 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.877951 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878001 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878052 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878135 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878187 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878236 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878289 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878334 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878377 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878421 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878469 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878506 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878541 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878578 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:40 crc kubenswrapper[3562]: E1213 00:07:40.878621 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.470688 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:41 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:41 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:41 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.470769 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.875132 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.875171 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.875257 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.875158 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.875177 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.875221 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:41 crc kubenswrapper[3562]: E1213 00:07:41.875400 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.875430 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.875468 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:41 crc kubenswrapper[3562]: E1213 00:07:41.875604 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.875623 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:41 crc kubenswrapper[3562]: E1213 00:07:41.875719 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:41 crc kubenswrapper[3562]: E1213 00:07:41.875817 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:41 crc kubenswrapper[3562]: E1213 00:07:41.875928 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:41 crc kubenswrapper[3562]: E1213 00:07:41.876015 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.876064 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:41 crc kubenswrapper[3562]: E1213 00:07:41.876099 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:41 crc kubenswrapper[3562]: I1213 00:07:41.876140 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Dec 13 00:07:41 crc kubenswrapper[3562]: E1213 00:07:41.876266 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 13 00:07:41 crc kubenswrapper[3562]: E1213 00:07:41.876306 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 13 00:07:41 crc kubenswrapper[3562]: E1213 00:07:41.876425 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 13 00:07:41 crc kubenswrapper[3562]: E1213 00:07:41.876506 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.471109 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 00:07:42 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld
Dec 13 00:07:42 crc kubenswrapper[3562]: [+]process-running ok
Dec 13 00:07:42 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.471180 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874661 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874710 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874769 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874717 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874785 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874812 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874675 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874850 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874866 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874932 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874965 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874993 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.875002 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.875004 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.875023 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874675 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.875040 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.875014 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874970 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874871 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.875125 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.875138 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874699 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874705 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874784 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874820 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874840 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.875229 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874881 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874889 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874576 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874893 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874694 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874890 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874907 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.874977 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.875417 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.875538 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.875559 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.875625 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.875681 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.875709 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.875780 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.875843 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.875921 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.875994 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876054 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876108 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876159 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876229 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876282 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876413 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876438 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876578 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876666 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876697 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876828 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876857 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.876983 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877061 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:42 crc kubenswrapper[3562]: I1213 00:07:42.877093 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877151 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877229 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877279 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877353 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877433 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877497 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877535 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877576 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877665 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877770 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:42 crc kubenswrapper[3562]: E1213 00:07:42.877851 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.472199 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:43 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:43 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:43 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.472283 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.877038 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.877062 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.877148 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.877173 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.877185 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.877166 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.877190 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.877194 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.877587 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.877222 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.877220 3562 util.go:30] "No sandbox for pod can be found. 
Dec 13 00:07:43 crc kubenswrapper[3562]: I1213 00:07:43.877290 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.877662 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.877780 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.877884 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.878034 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.878215 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.878241 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.878329 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.878931 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.879046 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.879193 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Dec 13 00:07:43 crc kubenswrapper[3562]: E1213 00:07:43.885006 3562 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.472360 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 00:07:44 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld
Dec 13 00:07:44 crc kubenswrapper[3562]: [+]process-running ok
Dec 13 00:07:44 crc kubenswrapper[3562]: healthz check failed
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.472470 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875083 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875118 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875162 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875179 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875191 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875203 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875229 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875243 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.876209 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875225 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875281 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875293 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875295 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875305 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.876421 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.876511 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875312 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875321 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.876704 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.876833 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.877002 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.877138 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.877291 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.877454 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875341 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875355 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.877617 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.877734 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.877847 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875367 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875363 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875397 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875412 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875423 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875425 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875422 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875487 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875489 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875490 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875501 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.878132 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875516 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875499 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875545 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875568 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875596 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875618 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875669 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:44 crc kubenswrapper[3562]: I1213 00:07:44.875553 3562 util.go:30] "No sandbox for pod can be found. 
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.878542 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.878649 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.878830 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.879013 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.879132 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.879252 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.879527 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.879754 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.879933 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.880071 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.880229 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.880356 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.880473 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.880632 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.880796 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.881067 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.881302 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.881397 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.881489 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.881602 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.881717 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.881840 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:44 crc kubenswrapper[3562]: E1213 00:07:44.882066 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.471113 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:45 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:45 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:45 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.471199 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.874786 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.874851 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.874895 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.874947 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.874815 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.874854 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.874815 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.875009 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.874815 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.874870 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:45 crc kubenswrapper[3562]: E1213 00:07:45.875097 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:45 crc kubenswrapper[3562]: E1213 00:07:45.875352 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:45 crc kubenswrapper[3562]: E1213 00:07:45.875541 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:45 crc kubenswrapper[3562]: I1213 00:07:45.875614 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:45 crc kubenswrapper[3562]: E1213 00:07:45.875683 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:45 crc kubenswrapper[3562]: E1213 00:07:45.875790 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:45 crc kubenswrapper[3562]: E1213 00:07:45.876026 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:45 crc kubenswrapper[3562]: E1213 00:07:45.876185 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:45 crc kubenswrapper[3562]: E1213 00:07:45.876314 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:45 crc kubenswrapper[3562]: E1213 00:07:45.876435 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:45 crc kubenswrapper[3562]: E1213 00:07:45.876549 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:45 crc kubenswrapper[3562]: E1213 00:07:45.876734 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.471596 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:46 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:46 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:46 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.471675 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874730 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874818 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874852 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874853 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874872 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874941 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874955 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874998 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875010 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875018 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875040 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875009 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875057 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875060 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874816 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874963 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875096 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875103 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875112 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874758 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875158 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874729 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874836 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874929 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874895 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874972 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.875258 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875007 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875016 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875029 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875056 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875077 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.875404 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.874974 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.875696 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.875506 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.875815 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.875843 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.875967 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.876013 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.876054 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.876139 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.876207 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:46 crc kubenswrapper[3562]: I1213 00:07:46.876264 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.876289 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.876375 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.876531 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.876618 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.876739 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.876788 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.876875 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877004 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877133 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877145 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877213 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877284 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877359 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877401 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877475 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877631 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877788 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877849 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.877877 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.878026 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.878074 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.878161 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.878205 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.878292 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.878380 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.878414 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.878454 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:46 crc kubenswrapper[3562]: E1213 00:07:46.878500 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.471666 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:47 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:47 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:47 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.471820 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.874843 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.875008 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.875055 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.875089 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.874867 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.874998 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.875211 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.875005 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.875042 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:47 crc kubenswrapper[3562]: E1213 00:07:47.875345 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.875109 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:47 crc kubenswrapper[3562]: I1213 00:07:47.875169 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:47 crc kubenswrapper[3562]: E1213 00:07:47.875668 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Dec 13 00:07:47 crc kubenswrapper[3562]: E1213 00:07:47.875687 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Dec 13 00:07:47 crc kubenswrapper[3562]: E1213 00:07:47.875750 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Dec 13 00:07:47 crc kubenswrapper[3562]: E1213 00:07:47.875870 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Dec 13 00:07:47 crc kubenswrapper[3562]: E1213 00:07:47.876336 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Dec 13 00:07:47 crc kubenswrapper[3562]: E1213 00:07:47.876481 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Dec 13 00:07:47 crc kubenswrapper[3562]: E1213 00:07:47.876601 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Dec 13 00:07:47 crc kubenswrapper[3562]: E1213 00:07:47.876702 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Dec 13 00:07:47 crc kubenswrapper[3562]: E1213 00:07:47.876797 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Dec 13 00:07:47 crc kubenswrapper[3562]: E1213 00:07:47.876867 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.470533 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:48 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:48 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:48 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.470613 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.874768 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.874898 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.875065 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.875091 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.875154 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.875257 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.875307 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.875359 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.875407 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.875439 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.875498 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.875525 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.875584 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.875641 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.875950 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.876106 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.876200 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.876348 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.876452 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.876590 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.876673 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.876787 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.876885 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.877023 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.877176 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.877332 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.877386 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.877539 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.877680 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.877767 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.877809 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.877922 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.877942 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.877994 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878051 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878071 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.878144 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878187 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878232 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878236 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878198 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878284 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.878201 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.878394 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878259 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878611 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878709 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878637 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878764 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878677 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.878789 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878825 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878841 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878859 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Dec 13 00:07:48 crc kubenswrapper[3562]: I1213 00:07:48.878842 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.878980 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.879056 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.879213 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.879349 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.879428 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.879495 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.879584 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.879690 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.879799 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.879878 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.880029 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.880109 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.880186 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.880260 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.880366 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.880535 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Dec 13 00:07:48 crc kubenswrapper[3562]: E1213 00:07:48.880666 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.471343 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:49 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:49 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:49 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.471452 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.874760 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.874833 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.874888 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.874927 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.874979 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.875088 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.874809 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.875418 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.875497 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.875622 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.875763 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.880028 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.880201 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.880648 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.881329 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.881372 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.881667 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.884553 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.885264 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.886106 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.886197 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.886419 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.886457 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.886501 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.886839 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.887686 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.887695 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 
00:07:49.888107 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.887720 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.887750 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.887982 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888315 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888384 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888441 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888467 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888471 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888400 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888049 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888479 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888033 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888740 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888785 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.888801 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.889044 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.889230 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.889689 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 13 00:07:49 crc kubenswrapper[3562]: I1213 00:07:49.890067 3562 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.225681 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.471211 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:50 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:50 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:50 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.471291 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874501 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874566 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874588 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874612 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874629 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874631 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874640 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874712 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874567 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874712 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874803 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874646 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874511 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874595 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874659 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.874990 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875012 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875028 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875044 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875240 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875247 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875390 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875490 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875511 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875532 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875613 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875676 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875739 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.875937 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.876256 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.876625 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.876956 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.877074 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.877133 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.877671 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.877794 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.878520 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.880289 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.881465 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.884532 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.885131 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.886514 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.886548 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.886656 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.886725 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.886737 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.886861 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 13 
00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.887280 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.887406 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.888865 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.888947 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.889079 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.889228 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.889474 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.889721 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.889733 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.889085 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.890069 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.890213 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.890226 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.890348 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.890364 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.890457 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.890475 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.890557 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.890606 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 13 00:07:50 
crc kubenswrapper[3562]: I1213 00:07:50.890684 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.890794 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.891203 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.891215 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.892195 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.892534 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.893061 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.893332 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.894390 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.894430 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.894532 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.894693 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.894716 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.894859 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.894898 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.895464 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896312 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896400 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896428 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896457 3562 reflector.go:351] Caches 
populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896570 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896599 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896625 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896678 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896703 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896709 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896770 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896793 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896861 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896892 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896896 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896957 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896976 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896990 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897065 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897106 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897065 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897121 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897192 3562 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897233 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897279 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897338 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897365 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.896892 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897516 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897560 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897589 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897641 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897657 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897765 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897521 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897851 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897868 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897963 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897976 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897980 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.897994 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.898101 3562 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.898133 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.898114 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.898501 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.906161 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.919266 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.921618 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.922842 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.925292 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.926470 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.927485 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.929495 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.929823 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.931036 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.934663 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.936856 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.956937 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.976786 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 13 00:07:50 crc kubenswrapper[3562]: I1213 00:07:50.997104 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.017333 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" 
Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.037593 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.056965 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.077618 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.096881 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.116971 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.137187 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.157238 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.178123 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.198040 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.217636 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.243428 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.258252 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.278442 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.297573 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.317633 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.471720 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:51 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:51 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:51 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:51 crc kubenswrapper[3562]: I1213 00:07:51.471795 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:52 crc 
kubenswrapper[3562]: I1213 00:07:52.475920 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:52 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:52 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:52 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:52 crc kubenswrapper[3562]: I1213 00:07:52.476020 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:53 crc kubenswrapper[3562]: I1213 00:07:53.471075 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:53 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:53 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:53 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:53 crc kubenswrapper[3562]: I1213 00:07:53.471155 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:54 crc kubenswrapper[3562]: I1213 00:07:54.471155 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:54 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:54 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:54 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:54 crc kubenswrapper[3562]: I1213 00:07:54.471436 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:55 crc kubenswrapper[3562]: I1213 00:07:55.471162 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:55 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:55 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:55 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:55 crc kubenswrapper[3562]: I1213 00:07:55.471276 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:56 crc kubenswrapper[3562]: I1213 00:07:56.471537 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:56 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:56 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:56 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:56 crc kubenswrapper[3562]: I1213 00:07:56.471612 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:57 crc kubenswrapper[3562]: I1213 00:07:57.471863 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:57 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:57 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:57 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:57 crc kubenswrapper[3562]: I1213 00:07:57.472813 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:58 crc kubenswrapper[3562]: I1213 00:07:58.472479 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:58 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:58 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:58 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:58 crc kubenswrapper[3562]: I1213 00:07:58.472572 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:07:59 crc kubenswrapper[3562]: I1213 00:07:59.223751 3562 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeReady" Dec 13 00:07:59 crc kubenswrapper[3562]: I1213 00:07:59.470667 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:07:59 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:07:59 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:07:59 crc kubenswrapper[3562]: healthz check failed Dec 13 00:07:59 crc kubenswrapper[3562]: I1213 00:07:59.470770 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:00 crc kubenswrapper[3562]: I1213 00:08:00.472315 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:00 
crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:00 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:00 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:00 crc kubenswrapper[3562]: I1213 00:08:00.472428 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:01 crc kubenswrapper[3562]: I1213 00:08:01.471783 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:01 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:01 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:01 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:01 crc kubenswrapper[3562]: I1213 00:08:01.471930 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:02 crc kubenswrapper[3562]: I1213 00:08:02.471949 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:02 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:02 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:02 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:02 crc kubenswrapper[3562]: I1213 00:08:02.472098 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:03 crc kubenswrapper[3562]: I1213 00:08:03.473398 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:03 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:03 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:03 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:03 crc kubenswrapper[3562]: I1213 00:08:03.473474 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:03 crc kubenswrapper[3562]: I1213 00:08:03.726440 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:08:03 crc kubenswrapper[3562]: I1213 00:08:03.726545 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:08:03 crc kubenswrapper[3562]: I1213 00:08:03.726595 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:08:03 
crc kubenswrapper[3562]: I1213 00:08:03.726661 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:08:03 crc kubenswrapper[3562]: I1213 00:08:03.726706 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:08:04 crc kubenswrapper[3562]: I1213 00:08:04.471505 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:04 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:04 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:04 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:04 crc kubenswrapper[3562]: I1213 00:08:04.471584 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:05 crc kubenswrapper[3562]: I1213 00:08:05.472614 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:05 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:05 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:05 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:05 crc kubenswrapper[3562]: I1213 00:08:05.472706 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:06 crc kubenswrapper[3562]: I1213 00:08:06.471039 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:06 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:06 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:06 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:06 crc kubenswrapper[3562]: I1213 00:08:06.471213 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:07 crc kubenswrapper[3562]: I1213 00:08:07.471449 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:07 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:07 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:07 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:07 crc kubenswrapper[3562]: I1213 00:08:07.471556 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:08 crc kubenswrapper[3562]: I1213 00:08:08.471001 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:08 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:08 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:08 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:08 crc kubenswrapper[3562]: I1213 00:08:08.471154 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:09 crc kubenswrapper[3562]: I1213 00:08:09.471682 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:09 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:09 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:09 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:09 crc kubenswrapper[3562]: I1213 00:08:09.471749 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:10 crc kubenswrapper[3562]: I1213 00:08:10.471748 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:10 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:10 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:10 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:10 crc kubenswrapper[3562]: I1213 00:08:10.471886 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:11 crc kubenswrapper[3562]: I1213 00:08:11.471613 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:11 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:11 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:11 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:11 crc kubenswrapper[3562]: I1213 00:08:11.471759 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:12 crc kubenswrapper[3562]: I1213 00:08:12.471733 3562 patch_prober.go:28] 
interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:12 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:12 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:12 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:12 crc kubenswrapper[3562]: I1213 00:08:12.472429 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:13 crc kubenswrapper[3562]: I1213 00:08:13.471332 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:13 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:13 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:13 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:13 crc kubenswrapper[3562]: I1213 00:08:13.471425 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:14 crc kubenswrapper[3562]: I1213 00:08:14.471590 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:14 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:14 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:14 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:14 crc kubenswrapper[3562]: I1213 00:08:14.472743 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:15 crc kubenswrapper[3562]: I1213 00:08:15.470620 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:15 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:15 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:15 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:15 crc kubenswrapper[3562]: I1213 00:08:15.470710 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:16 crc kubenswrapper[3562]: I1213 00:08:16.470755 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 
00:08:16 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:16 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:16 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:16 crc kubenswrapper[3562]: I1213 00:08:16.470830 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:17 crc kubenswrapper[3562]: I1213 00:08:17.471390 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:17 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:17 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:17 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:17 crc kubenswrapper[3562]: I1213 00:08:17.471457 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:18 crc kubenswrapper[3562]: I1213 00:08:18.472605 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:18 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:18 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:18 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:18 crc kubenswrapper[3562]: I1213 00:08:18.472685 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:19 crc kubenswrapper[3562]: I1213 00:08:19.471190 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:19 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:19 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:19 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:19 crc kubenswrapper[3562]: I1213 00:08:19.471286 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:20 crc kubenswrapper[3562]: I1213 00:08:20.470273 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:20 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:20 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:20 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:20 crc 
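A startup probe is retried every periodSeconds until it either succeeds or fails failureThreshold times in a row; in the entries that follow, the threshold is exhausted at 00:08:23, kubelet reports "Container router failed startup probe, will be restarted", and kills the container with the pod's termination grace period (gracePeriod=3600). Below is a sketch of how such a probe is declared with the Kubernetes Go API types; the path, port, and counts are illustrative assumptions, since the log shows only the one-second cadence and the eventual kill.

    // startupprobe.go - a sketch of a startup probe declared with the
    // Kubernetes Go API types (k8s.io/api). Path, port, and thresholds are
    // illustrative assumptions, not values recovered from the log.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        c := corev1.Container{
            Name: "router",
            StartupProbe: &corev1.Probe{
                // ProbeHandler is the field name in current API versions
                ProbeHandler: corev1.ProbeHandler{
                    HTTPGet: &corev1.HTTPGetAction{
                        Path: "/healthz",
                        Port: intstr.FromInt(1936),
                    },
                },
                PeriodSeconds:    1,  // matches the one-second cadence above
                FailureThreshold: 30, // exhausted -> container is restarted
            },
        }
        fmt.Printf("%+v\n", c.StartupProbe)
    }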
kubenswrapper[3562]: I1213 00:08:20.470363 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:21 crc kubenswrapper[3562]: I1213 00:08:21.471100 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:21 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:21 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:21 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:21 crc kubenswrapper[3562]: I1213 00:08:21.471177 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:22 crc kubenswrapper[3562]: I1213 00:08:22.471408 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:22 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:22 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:22 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:22 crc kubenswrapper[3562]: I1213 00:08:22.471491 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:23 crc kubenswrapper[3562]: I1213 00:08:23.471624 3562 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 00:08:23 crc kubenswrapper[3562]: [-]has-synced failed: reason withheld Dec 13 00:08:23 crc kubenswrapper[3562]: [+]process-running ok Dec 13 00:08:23 crc kubenswrapper[3562]: healthz check failed Dec 13 00:08:23 crc kubenswrapper[3562]: I1213 00:08:23.471708 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 00:08:23 crc kubenswrapper[3562]: I1213 00:08:23.471752 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:08:23 crc kubenswrapper[3562]: I1213 00:08:23.472896 3562 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"22511aa8d6dc79c1754301b9800a2a0b1dc4d8841bd28ffd8250d62e84371bed"} pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted" Dec 13 00:08:23 crc kubenswrapper[3562]: I1213 00:08:23.472983 3562 kuberuntime_container.go:770] "Killing container with a grace period" 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" containerID="cri-o://22511aa8d6dc79c1754301b9800a2a0b1dc4d8841bd28ffd8250d62e84371bed" gracePeriod=3600 Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788041 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788187 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788228 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788258 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788289 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788315 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788341 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788371 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788409 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788439 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788484 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788532 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788571 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788600 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788675 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788708 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788741 3562 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788768 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788821 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788850 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.788891 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.791989 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.797319 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.797560 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.797711 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.797372 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.797888 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.803651 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.804039 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.804324 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.797955 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.804702 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.804728 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.804955 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.805627 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.806335 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.806944 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:30 crc 
kubenswrapper[3562]: I1213 00:08:30.797982 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.797993 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.798027 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.798040 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.798071 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.798106 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.798139 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.798241 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.798312 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.798367 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.798548 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.809599 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.811992 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.812558 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.812918 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.815306 3562 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.816055 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.817024 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.822727 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.823860 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.872275 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.875506 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.875980 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.876977 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.882832 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.883420 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891014 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891369 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891402 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891425 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891446 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891469 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891499 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891519 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: 
\"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891541 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891561 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891599 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891621 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891642 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891671 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891691 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891728 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891753 3562 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.891984 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.892015 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.892040 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.892066 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.892092 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.892116 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.892141 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.892163 3562 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.892196 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.892231 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.892260 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: E1213 00:08:30.892978 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-12-13 00:10:32.892963106 +0000 UTC m=+269.376193639 (durationBeforeRetry 2m2s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893112 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893198 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893245 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893273 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893330 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893354 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893374 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893435 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
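The MountDevice failure above is a registration race rather than a storage error: the kubevirt.io.hostpath-provisioner node plugin has not yet announced itself to kubelet's plugin manager, so the CSI attacher cannot construct a client for it, and nestedpendingoperations schedules the retry with exponential backoff (2m2s here) instead of retrying immediately. One way to confirm registration is to read the node's CSINode object, which lists every driver the node has registered. A client-go sketch follows; it assumes in-cluster configuration and hardcodes the node name crc from this log.

    // csicheck.go - a sketch that lists the CSI drivers registered on a node
    // by reading its CSINode object; a mount like the one failing above can
    // only proceed once the driver name appears in this list. Assumes
    // in-cluster config and the node name "crc".
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // e.g. kubevirt.io.hostpath-provisioner, once its node plugin registers
        for _, d := range csiNode.Spec.Drivers {
            fmt.Println("registered CSI driver:", d.Name)
        }
    }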
\"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893463 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893514 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893538 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893581 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893616 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893658 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893686 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893741 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893764 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893786 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893844 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893866 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893931 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.893961 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894019 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894079 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894108 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894129 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894173 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894209 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894250 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894274 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894332 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894407 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894434 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: 
\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894456 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894508 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894610 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894634 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894678 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894706 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894727 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894780 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894802 3562 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894857 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894882 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894960 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.894983 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.895028 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.895058 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.895114 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.896044 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: 
\"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.896084 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.896243 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.896297 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.896325 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.899202 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.899301 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.899306 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.900122 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.916341 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.916597 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.917143 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.917302 3562 operation_generator.go:721] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.917356 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.917676 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.918022 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.918081 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.918291 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.918380 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.918475 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.918606 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.918745 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.918795 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.918803 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.918886 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.918918 3562 reflector.go:351] 
Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.919019 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.919160 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.919561 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.919623 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.919959 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.920097 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.920410 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.920518 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.920639 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.920519 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.920847 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.921233 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.921622 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.921793 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.921916 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.921943 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.922044 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.922125 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.922139 3562 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"dns-default" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.922144 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.922244 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.922253 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.922282 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.922337 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.922343 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.922481 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.922721 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.923044 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.923144 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.923207 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.923308 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.923315 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.923379 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.923467 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.923499 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.923565 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.923652 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.923727 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 13 00:08:30 crc 
kubenswrapper[3562]: I1213 00:08:30.923789 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.924541 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.925289 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.925453 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.925623 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.925781 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.927154 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.927250 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.927294 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.927416 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.927456 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.927591 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.927657 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.927882 3562 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.928341 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.928361 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.928882 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.928484 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.929328 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.929520 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.930023 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.937008 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.938637 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.941536 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.943147 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.945365 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.945537 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.945568 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod 
\"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.946159 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.946228 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.946293 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.946334 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.946862 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.946878 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.947006 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.947069 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.947513 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " 
pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.947657 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.947728 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.947876 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.947894 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.948018 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.948061 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.948320 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.948370 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.948385 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.948598 3562 operation_generator.go:721] "MountVolume.SetUp succeeded 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.948685 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.948872 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.949484 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.949726 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.950165 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.950246 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.950873 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.951035 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.951599 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 
00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.952226 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.952261 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.952777 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.952984 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.953490 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.953793 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.953952 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.954417 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.955016 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.955385 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.955421 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.955664 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.956054 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.956278 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.956382 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.956571 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.957691 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.957711 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.957724 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.957733 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.957800 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.957813 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.958024 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.958079 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.958276 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.958388 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.958493 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.958619 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.958678 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.959128 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.959751 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.960052 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.960165 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.960180 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.960191 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.960259 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.960420 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" 
(UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.961271 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.961814 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.962592 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.962679 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.962864 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.963133 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.963231 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.963358 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.964785 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.980745 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.987486 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.989743 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.996247 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.998218 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.998390 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.998514 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.998805 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.999085 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.999210 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.999313 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.999435 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.999552 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.999671 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.999787 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:08:30 crc kubenswrapper[3562]: I1213 00:08:30.999920 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.000048 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.000165 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.000269 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.000375 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.000491 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.000645 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.000760 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.000872 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.001031 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.001167 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.003370 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.003621 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.003650 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.003849 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.004068 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.004179 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.004239 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.004595 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.007426 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.009299 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.010163 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.010564 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.010711 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.011581 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.019197 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.030739 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.039860 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.043231 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.056613 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.061137 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.061462 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.078120 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.092261 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.103454 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.103667 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.103806 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.112292 3562 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.130982 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.131029 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.151322 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.163992 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.170679 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.191288 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.203480 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.205075 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.205129 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.212196 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.222876 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.226638 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.239646 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.247137 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.251560 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.263971 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.264392 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.265270 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.270536 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.275656 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.295480 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.308023 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.308694 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.309494 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.309524 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.309578 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.309605 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.312133 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.318299 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.320429 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.322054 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrd8h\" (UniqueName: 
\"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.324151 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.333376 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.353076 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.365081 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.368261 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.370272 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.376476 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.377919 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.378778 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.390105 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.391547 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.397364 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.411760 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.411771 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.412079 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.412143 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.412172 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.412193 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.412219 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.416944 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.417608 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rpl7\" (UniqueName: 
\"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.417804 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.423343 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.432207 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.437633 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.442326 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.443287 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"68b9adb9401e4eca213dc3af407736601bcbf0a46306604d98272be7928a8f83"} Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.451274 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.455722 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.456010 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.470639 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.485263 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.489128 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.491119 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.495515 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.512991 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.514688 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: W1213 00:08:31.522512 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod297ab9b6_2186_4d5b_a952_2bfd59af63c4.slice/crio-735bde00af993eaec8d3e47b3081bd49c9eae2267067f6974c638dd5d082c837 WatchSource:0}: Error finding container 735bde00af993eaec8d3e47b3081bd49c9eae2267067f6974c638dd5d082c837: Status 404 returned error can't find the container with id 735bde00af993eaec8d3e47b3081bd49c9eae2267067f6974c638dd5d082c837 Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.534510 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.540485 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.546862 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.550279 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.569695 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.571029 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.574469 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.591031 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.603065 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.613214 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: W1213 00:08:31.617077 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0f40333_c860_4c04_8058_a0bf572dcf12.slice/crio-ccb69f90d7c565e683202fee691901e05031a3ff9d525fdf3094cffe03037e6c WatchSource:0}: Error finding container ccb69f90d7c565e683202fee691901e05031a3ff9d525fdf3094cffe03037e6c: Status 404 returned error can't find the container with id ccb69f90d7c565e683202fee691901e05031a3ff9d525fdf3094cffe03037e6c Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.630951 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.631758 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.636968 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.646234 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.651620 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.652995 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.656382 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.670210 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.673672 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.687256 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.689442 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.690332 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.710500 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.725545 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.730917 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.731909 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.750177 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.750415 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.758402 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.771490 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.774835 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.790414 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.799672 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.814142 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.833103 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.835584 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.840144 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.849658 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.851627 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.868505 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.870743 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.880492 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.899354 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.914414 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhl4\" (UniqueName: 
\"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.933096 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.957432 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.968678 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Dec 13 00:08:31 crc kubenswrapper[3562]: I1213 00:08:31.982542 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.121923 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.130734 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.148952 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.157282 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.455089 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"7d0bc7d4c91e939d39cde0865eb12e34bf21ba6e9949a18d8639eb77d6c02074"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.470628 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"365435f4f3e1ab930cb6a9752a06fbc3fe7d8e15f72f7200ad454fccec502714"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.472422 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"3c7829365dd938e0443012a7653a3a11a297d96883362d7f3adc2384109bd27f"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.477275 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"ccb69f90d7c565e683202fee691901e05031a3ff9d525fdf3094cffe03037e6c"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.478375 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"f78350fe1c5c1685485c8096cec5b831bbe12d37cc8822c494948f2dbdc2ab9b"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.479490 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"9e8269ba2a96e265a30c3d6850bcf8229078525251702fc6eab0ceb28f5245b5"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.480405 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"0e4ad50569654dc49b8fa554cebd4aeffc4ed1b813a6e4b0368622dfdd01fc80"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.481295 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"91fea45557199379f393b7e0657efea64d6f4350d62f74cb558d5d348c5b7321"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.485630 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"20d357defb68779da15405c9b35931637f666cf9d3f18e542a95cd7bf4469f40"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.485665 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"3515a621157b2d1c9133b8d2d9bb50d7072cdfbe9687446359e662445857187d"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.487337 3562 
kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"f2a70c810a4160fb52dab790b9d69c333428f3376db680d4b6bcd259ce133d3b"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.491369 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"58d0560cb4755f90a6ed2115b1bc72f54e5ae00d405f29e2768e740e7f1495d4"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.493216 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"b80d6a778ead2c7c6ae610f4fefb614976b20c604b9176d4a7f76b6c7ca9d50d"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.494898 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"dd7e66ffd6d57361bf49f68e324885f2abc239c8ab8a6d4c5b408f96cba8778e"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.518265 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"2da6f397d56af8476b2594934ba41ce28a18fadeb4e31d6ac42b659714c7318f"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.523959 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"f11c0f47a564e171f7815dfc7589cea09525482eee85dd1d01ff52967e115482"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.526428 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"22a3b377649767b8aa4382ec79a9372bba551f276cfdae41d8ab8310c2e05cbc"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.527985 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"1c2e3ef4de847de2c53265f8e219c07d3b5a9063f636d5b76a03ec49b48125da"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.529749 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"e049fb78decf6a81b2efd5eaf9d16fc7264d8a57ac84386a8e5039ae18bfb09b"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.529781 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"735bde00af993eaec8d3e47b3081bd49c9eae2267067f6974c638dd5d082c837"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.530897 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" 
event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"bf67e0c6bd893a50a0c4b9fc019255e0ed2787692bc9675d80c326a234a19ec2"} Dec 13 00:08:32 crc kubenswrapper[3562]: I1213 00:08:32.534390 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"e58b270aaab26510bb0b39cb8838d176aa02ecdbcb3a4ec6ac033b38dc89451b"} Dec 13 00:08:33 crc kubenswrapper[3562]: W1213 00:08:33.240030 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9127708_ccfd_4891_8a3a_f0cacb77e0f4.slice/crio-3b29fd6af8eee0737f36ef67b73e03758e8a852495d7b03b49015ecfb1eceed1 WatchSource:0}: Error finding container 3b29fd6af8eee0737f36ef67b73e03758e8a852495d7b03b49015ecfb1eceed1: Status 404 returned error can't find the container with id 3b29fd6af8eee0737f36ef67b73e03758e8a852495d7b03b49015ecfb1eceed1 Dec 13 00:08:33 crc kubenswrapper[3562]: W1213 00:08:33.256890 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4dca86_e6ee_4ec9_8324_86aff960225e.slice/crio-7bda89fde1d856e752d73604dfe94da99a00f11f45dc4b14b6395f58e9e3fa1e WatchSource:0}: Error finding container 7bda89fde1d856e752d73604dfe94da99a00f11f45dc4b14b6395f58e9e3fa1e: Status 404 returned error can't find the container with id 7bda89fde1d856e752d73604dfe94da99a00f11f45dc4b14b6395f58e9e3fa1e Dec 13 00:08:33 crc kubenswrapper[3562]: W1213 00:08:33.317632 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae0dfbb_a0a9_45bb_85b5_cd9f94f64fe7.slice/crio-a90d58b9dea817e2988ae2caa8e054c91198afa9f5264ace3a9f4e949454673b WatchSource:0}: Error finding container a90d58b9dea817e2988ae2caa8e054c91198afa9f5264ace3a9f4e949454673b: Status 404 returned error can't find the container with id a90d58b9dea817e2988ae2caa8e054c91198afa9f5264ace3a9f4e949454673b Dec 13 00:08:33 crc kubenswrapper[3562]: W1213 00:08:33.339439 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e649ef6_bbda_4ad9_8a09_ac3803dd0cc1.slice/crio-d1af20500d492c107419843beac84e705f78254d17a7e634953793564622974e WatchSource:0}: Error finding container d1af20500d492c107419843beac84e705f78254d17a7e634953793564622974e: Status 404 returned error can't find the container with id d1af20500d492c107419843beac84e705f78254d17a7e634953793564622974e Dec 13 00:08:33 crc kubenswrapper[3562]: W1213 00:08:33.344678 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d51f445_054a_4e4f_a67b_a828f5a32511.slice/crio-dd606f6de5d14c5d1d9165cfaf9027a944203a6fb262240af029a1d60347d046 WatchSource:0}: Error finding container dd606f6de5d14c5d1d9165cfaf9027a944203a6fb262240af029a1d60347d046: Status 404 returned error can't find the container with id dd606f6de5d14c5d1d9165cfaf9027a944203a6fb262240af029a1d60347d046 Dec 13 00:08:33 crc kubenswrapper[3562]: W1213 00:08:33.357056 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b5d722a_1123_4935_9740_52a08d018bc9.slice/crio-d6108ae63d52c05a9d9c4fb56a63678a8b56a4391bce2c25004e3d62badb568c WatchSource:0}: Error finding container 
d6108ae63d52c05a9d9c4fb56a63678a8b56a4391bce2c25004e3d62badb568c: Status 404 returned error can't find the container with id d6108ae63d52c05a9d9c4fb56a63678a8b56a4391bce2c25004e3d62badb568c Dec 13 00:08:33 crc kubenswrapper[3562]: W1213 00:08:33.374074 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc782cf62_a827_4677_b3c2_6f82c5f09cbb.slice/crio-c3878d719398701044049e64b6685f4563a54d6352b17533a94949338ccda9ac WatchSource:0}: Error finding container c3878d719398701044049e64b6685f4563a54d6352b17533a94949338ccda9ac: Status 404 returned error can't find the container with id c3878d719398701044049e64b6685f4563a54d6352b17533a94949338ccda9ac Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.572549 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"a90d58b9dea817e2988ae2caa8e054c91198afa9f5264ace3a9f4e949454673b"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.594988 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"2c1bdd18a5bb3e03de5c03b9979723241ff3930ccf25fdfce781d5ed65848cb8"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.633263 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerStarted","Data":"5391da27b802f38330bcf9e4f82a5b3e218a9f77ceb2a778f9618a3305d99b9e"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.666418 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"6157d9b4a45ce8ff7012ffe9d970bdd94048579fb051455f5736cc13324c1c0d"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.675041 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"d1af20500d492c107419843beac84e705f78254d17a7e634953793564622974e"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.691011 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"f62bf07f4ab3023f511f6471d158fb1ad1626f27dac6f381aeab57f8810ee9e7"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.754083 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"eb5ce297cf3a8130f56bfdf6dd27663f7d56c7d07bb644b4b26ac6eb7b11b671"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.765488 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"3cc9b0da34101b2f1db59399a1a166a0b51f7e3e8c3941404232ea49a4f53505"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.789172 3562 kubelet.go:2461] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"b7c03d222e2fefaa0516f46d72fca2d663eeaeafe0e2c4c2468d09f6d3739cfa"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.789471 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.792429 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"7d935ae38eafd288b0bf6684585036fd5fa879811f459a17e350801d74397673"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.793286 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.795839 3562 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.796112 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.796319 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"fb9e0655e69a744ff45b24a22f95a4ce47a206978562e5625906eaad2aa302db"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.799564 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"dd638eb2ac513bd77039492918ec496f69c6862f70b151c9e641f4b9d20fb415"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.812924 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"63cb8dc18e67b904539ecfd1033233549fe1428b71bb4270178b92feba3fc873"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.815731 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"d6108ae63d52c05a9d9c4fb56a63678a8b56a4391bce2c25004e3d62badb568c"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.820537 3562 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="0577dd2efb3c46ded734294efb921191dfdc8167bdcc63fe3206f657b329da66" exitCode=0 Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.820639 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" 
event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"0577dd2efb3c46ded734294efb921191dfdc8167bdcc63fe3206f657b329da66"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.870416 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"3b29fd6af8eee0737f36ef67b73e03758e8a852495d7b03b49015ecfb1eceed1"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.875277 3562 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.902445 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"f59609102fc087fbbe4ac3fa3014e469a9288c53dcf5094587c3ad19e090f120"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.903107 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"c3878d719398701044049e64b6685f4563a54d6352b17533a94949338ccda9ac"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.924505 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"1a75e130567b2c4856d6b67bdf76d6ec8aa4ec2e42624d58cd4444a639de9863"} Dec 13 00:08:33 crc kubenswrapper[3562]: I1213 00:08:33.975595 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"7bda89fde1d856e752d73604dfe94da99a00f11f45dc4b14b6395f58e9e3fa1e"} Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.012739 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"fb25e8ec28dcd8ef92f217b44e82f5629e96d579547d19bb5bb6a10a738e4650"} Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.012957 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gbw49" Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.030863 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"582dba24ea83517326117340043209f0ebb1ae3e0f6adea5b370489ec6eb5879"} Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.041589 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"e90e37cdf6c29992453f0090ce3fc105bdd729eb34a0e94aaa5949bae813d790"} Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.057606 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"bbf9a8c1487fde60a90b717af91fd869b8eaf741b5bb6d9d0a845b08a1b0dc5f"} Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.066140 3562 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"dd606f6de5d14c5d1d9165cfaf9027a944203a6fb262240af029a1d60347d046"} Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.073502 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"53bf70ad572ca4ea1e8a69a0b5f8f02e62a0b92915f0425095bde4849d38a3ba"} Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.091135 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"40a0c7229ac6d93f4828fb0126e2fa62382f4b779090a59048a6f8caa19e107a"} Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.098189 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"454880f59aa3b22f3c076ef05c54192a766da4dce22117f5a64201ff5339ea49"} Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.129237 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"12f76956853bcc4ef2d56a9b72aef74d7ef832b2e582a901495987a42af5d82f"} Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.135131 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"821f68adceb8970821ae93a7333174c7d26c11641eeb878d3859e804aee3d6bb"} Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.135562 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.140955 3562 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 13 00:08:34 crc kubenswrapper[3562]: I1213 00:08:34.141040 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.141563 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"2ab5dda1e525a256f51ca54176fc7bf66d6ebe5488e94f0cd6e203847f061457"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.144421 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" 
event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"08177423b84281e6995315b72e3fb93776b902429e579de45c7964ca2b3503be"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.144468 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"186f1ca85bd8e10ab133957652f40978d789a5e111313a2dacc0a77fd5061e2e"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.147791 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"1fa24445d9ae3b60e7cf6988c03e95416a2407f4549863f1dc4d0789cb7b4ac1"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.150258 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.150777 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.151373 3562 patch_prober.go:28] interesting pod/oauth-openshift-74fc7c67cc-xqf8b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.72:6443/healthz\": dial tcp 10.217.0.72:6443: connect: connection refused" start-of-body= Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.151441 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.72:6443/healthz\": dial tcp 10.217.0.72:6443: connect: connection refused" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.153568 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"c534cf0669591466630a994aff3ed9e9337f048d16e63a2f260d8680e0b8abf9"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.160501 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"e205518a05fb40857ed6ed41b3ddee6c4c7d54f8087a01ae9e04391fc66af386"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.162471 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.163276 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.164255 3562 patch_prober.go:28] interesting pod/route-controller-manager-776b8b7477-sfpvs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection 
refused" start-of-body= Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.164296 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.165103 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"263497b2e5858a3b59626542ca54a64704f81e6e92d1f7354af38ca4e16027e0"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.165872 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.175308 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"a92a290bdeff2cd26d74a7f7b42694be1909e5ba301663d03c3144ed64a02706"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.175342 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"c96898f303ad7e5c2dcd0c3b89ca4950093578030ad67efb367df7109ad95501"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.176114 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.177962 3562 generic.go:334] "Generic (PLEG): container finished" podID="41e8708a-e40d-4d28-846b-c52eda4d1755" containerID="fea9498e99f8501a6f57df0c401649b5399dc0fb5a517821de0d92e4043084f8" exitCode=0 Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.178011 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerDied","Data":"fea9498e99f8501a6f57df0c401649b5399dc0fb5a517821de0d92e4043084f8"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.180539 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"5ea55447540e643beff74915ba362b98fc2137049a1102af54df6a6869324e45"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.184687 3562 generic.go:334] "Generic (PLEG): container finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="59afe7eaf0205414d90e0c74df18dd68b990d6daa45326ac28a6de2667f135ac" exitCode=0 Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.184753 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"59afe7eaf0205414d90e0c74df18dd68b990d6daa45326ac28a6de2667f135ac"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.185699 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"574c6c28c34ac3ec1a96a99b3e920ef4e68106eea44589639bfd3cfdb7edb313"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.187954 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"9cc9bd60ab5cb1a1b840fe32b8c4e33c4c499c4feb764cf6b4287f4b93817d35"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.194211 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"9747465a078c9ef9791281de8811e879fa60a8c179d9cf7cee2e4a6a610a6b9f"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.198281 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"05e89100476534d242bf587dc691d40d8a888c510dedea53e6e96c563393252e"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.198308 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"57f47837f7061056d9e5a85d7222db287d5e2ec94b8e12f4ace97380dd92226d"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.203018 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"d592b7bccf18e1ee4a120da660d1a36ad1f32fcc976fea6ce7e43ca8ad92e7cd"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.205093 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"3bd71792db363170ce44266ce69c6f8fdbc14d9398ece9ea3fdc02a944fb174b"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.207132 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.208894 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.208994 3562 patch_prober.go:28] interesting pod/controller-manager-778975cc4f-x5vcf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body= Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.209030 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: 
connect: connection refused" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.211084 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"5d5bc451b25cf5600add1f4a152296d52f6402e54bfa4962a8c9c0237e9e5ad0"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.214212 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"e0d13fd8f66e38765815c859bdae86d68ca4c52185ba850368910e66218b7a0a"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.215333 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"d3f5d6785da9b2c392255cc3d57539744e49e410ea5fb1f00e237d66e736e773"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.228078 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"eaa3ede9ba28138e7225a86b8324fcb37d2ac70a80d6e87db46c5ec3be201480"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.238433 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"ce1e253007fbcdd5258e69942efc9232c3960be864b79ef227ec087aefefb1af"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.240170 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.244091 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"deedb546b50abc091a925a9cf2375669f8742b920446907b011c2ffe7f591bc5"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.247369 3562 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.247460 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.249938 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"3bd16a694434f9c1f1ef332ca5e5a859252815dfa5f21c7f1bb536002584c1d3"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.249979 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" 
event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"ffb968d2555a63e7b198c90552346cd7711328baa375ac79c87ecf0a54a31e47"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.255808 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"eab124c529b1df550d1cc2b09dbde0c9bf3cea071bf1437d6b0b284560eb9448"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.287977 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"674d44102f126ddcfb2143afc01cd33a43158abfa8abc9dd039dc55cfd67c52b"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.290602 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"9de5b6016c5be3f8e17f7e9a7bd3522345c211181ace2c06a9930d56202f625e"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.291537 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.304676 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"a51b9a7dac998d3c2b428d553c42e225ac6498bbefc32a61576d8fb4c66f2597"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.357337 3562 generic.go:334] "Generic (PLEG): container finished" podID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerID="9671b146e6f929a27a09d1b5a4aa07e24fcf92df9eb56f0cfa559143c9279902" exitCode=0 Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.357856 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"9671b146e6f929a27a09d1b5a4aa07e24fcf92df9eb56f0cfa559143c9279902"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.393084 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"3dd9101c529e6593228198972e9d84e5ff4909b8968a70303d3299a3a52cedfe"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.410171 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"e4218d01a3b3bd1cced8c61983c278003315fbec60c32fd6f774a7cf176d5b62"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.420938 3562 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="e5154634081ca49fc68e1d7232116283b323c2f9aa2c68a394ff7e7932b67c29" exitCode=0 Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.421020 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" 
event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"e5154634081ca49fc68e1d7232116283b323c2f9aa2c68a394ff7e7932b67c29"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.437734 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"a089fdd186cb9b0484fbd50119cbbd2469809e4c54d64a1bbc5d726235fdb6fb"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.439056 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.446878 3562 generic.go:334] "Generic (PLEG): container finished" podID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerID="6c9737f8bda3a61a3fef8c0f344d58b958010f55aa2e533cc6fd5511b697d302" exitCode=0 Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.446965 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerDied","Data":"6c9737f8bda3a61a3fef8c0f344d58b958010f55aa2e533cc6fd5511b697d302"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.467321 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"bc7ef7cee17efc85da708fd4233524f653d85a2eeef763b6ec6c0be8b064ecaf"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.468009 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.472881 3562 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="22be9d825296348dc766b0bae0f3dbc7758caaa53e88ed518d8aa124c2587a7f" exitCode=0 Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.472995 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"22be9d825296348dc766b0bae0f3dbc7758caaa53e88ed518d8aa124c2587a7f"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.475690 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"a0354e8836b7f6bd6c90deeaca909cbc8dc3b54523f973dbb38fa197f8004019"} Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.481427 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.629346 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Dec 13 00:08:35 crc kubenswrapper[3562]: I1213 00:08:35.649137 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.501546 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"88668764a6dfe4ad95c82d80741f0a74862906d24db0a4b1b24fb8faa63c13c1"} Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.511393 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"20089272784ff70b65f6b47e119baa5d7b763dbb882aeb4c6ebc36e943da04b7"} Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.541309 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"4b0c3677ad712516ac23adc70127dca17c2d1d8336c2328cbf680f31d6ac5840"} Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.541367 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"55ab20585afdd6e95077c6b56e07cd685a39eda3e60a6a0b4e79206455e085d3"} Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.567433 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"22672358bf3aee24a7cee57921b387a668c3422f7444b47d586a9d2eeb602ae2"} Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.579497 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"dd9b347ec3a03e29fed9a6e43941519115dd6521476a7d25944353db53429d26"} Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.611150 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"82f4f5865ed2b920c64917ec9b6dca008c05591dcdba28c9bc442e619ba35edb"} Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.632846 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"0649702de80921745f78b06049d353539b700fca4ec22f4012228e951144b26f"} Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.671546 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"6c32e961dcbc396dec9e527d8eae41c351d98afdf1540cc1bc1900c5bf7a8901"} Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.708454 3562 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="68497c68fd7f295150ee02ff86f6124d9e5ecbd3900e8d5472d038f55a055c86" exitCode=0 Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.710197 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"68497c68fd7f295150ee02ff86f6124d9e5ecbd3900e8d5472d038f55a055c86"} Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 
00:08:36.712883 3562 patch_prober.go:28] interesting pod/controller-manager-778975cc4f-x5vcf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body= Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.713010 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.713369 3562 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.713393 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.731267 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.743763 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:08:36 crc kubenswrapper[3562]: I1213 00:08:36.753293 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Dec 13 00:08:37 crc kubenswrapper[3562]: I1213 00:08:37.128570 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Dec 13 00:08:37 crc kubenswrapper[3562]: I1213 00:08:37.352595 3562 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 13 00:08:37 crc kubenswrapper[3562]: I1213 00:08:37.521025 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:08:37 crc kubenswrapper[3562]: I1213 00:08:37.733606 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"8302f9b6ab0583c8073682e273756870e9209d8cfe90d34a84d6fc87bba0ecaf"} Dec 13 00:08:37 crc kubenswrapper[3562]: I1213 00:08:37.737790 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"515fc1c9a0117f35cef31bd40c90f15084f14db9d36da2322f8b44b6e9277184"} Dec 13 00:08:37 crc kubenswrapper[3562]: I1213 00:08:37.741349 3562 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Dec 13 00:08:37 crc kubenswrapper[3562]: I1213 00:08:37.741410 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Dec 13 00:08:37 crc kubenswrapper[3562]: I1213 00:08:37.750381 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:08:37 crc kubenswrapper[3562]: I1213 00:08:37.954954 3562 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-13T00:08:37.352854812Z","Handler":null,"Name":""} Dec 13 00:08:38 crc kubenswrapper[3562]: I1213 00:08:38.037636 3562 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 13 00:08:38 crc kubenswrapper[3562]: I1213 00:08:38.037678 3562 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 13 00:08:38 crc kubenswrapper[3562]: I1213 00:08:38.793669 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"545ed6ee3167a594144d05712528886478733804206954789bd0d1e18d811ed7"} Dec 13 00:08:38 crc kubenswrapper[3562]: I1213 00:08:38.819278 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Dec 13 00:08:39 crc kubenswrapper[3562]: I1213 00:08:39.809341 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"d5990be6b70c2ca72797397ed92191c96d3cfa153c12e77e08534baa9bfda363"} Dec 13 00:08:40 crc kubenswrapper[3562]: I1213 00:08:40.996654 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:40 crc kubenswrapper[3562]: I1213 00:08:40.996703 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.006293 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.392146 3562 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.392246 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.392675 3562 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.392822 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.732513 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.732567 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.741887 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.758558 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.760005 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.765785 3562 patch_prober.go:28] interesting pod/console-644bb77b49-5x5xk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.765862 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.830387 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Dec 13 00:08:41 crc kubenswrapper[3562]: I1213 00:08:41.831004 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Dec 13 00:08:42 crc kubenswrapper[3562]: I1213 00:08:42.991621 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gbw49" Dec 13 00:08:43 crc kubenswrapper[3562]: I1213 00:08:43.053396 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:08:43 crc kubenswrapper[3562]: I1213 00:08:43.053476 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" 
podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.631072 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29426400-qzd6w"] Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.631182 3562 topology_manager.go:215] "Topology Admit Handler" podUID="65e6c5f4-6f11-4ec0-8ce6-a544e417c56e" podNamespace="openshift-image-registry" podName="image-pruner-29426400-qzd6w" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.631752 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29426400-qzd6w" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.634570 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.634777 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-nzhll" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.646424 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m25wn"] Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.646562 3562 topology_manager.go:215] "Topology Admit Handler" podUID="55f01f1d-1d56-4031-98bf-5747ea5930eb" podNamespace="openshift-marketplace" podName="redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.647787 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.659237 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-269h6"] Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.659362 3562 topology_manager.go:215] "Topology Admit Handler" podUID="8f7307bd-8e93-41cb-a32f-14b02f9b700e" podNamespace="openshift-marketplace" podName="certified-operators-269h6" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.660385 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.670880 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8b6x9"] Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.671484 3562 topology_manager.go:215] "Topology Admit Handler" podUID="aa79e90a-de78-46e9-8a61-9f25c6962160" podNamespace="openshift-marketplace" podName="redhat-marketplace-8b6x9" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.675232 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.676984 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2"] Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.677061 3562 topology_manager.go:215] "Topology Admit Handler" podUID="869f63ac-ddad-4292-8bfc-a3179d1e211b" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29426400-jg4z2" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.677570 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.682852 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29426400-qzd6w"] Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.684102 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.684850 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.687331 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-269h6"] Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.689434 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2"] Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.697392 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8b6x9"] Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.702002 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m25wn"] Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.794876 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-catalog-content\") pod \"redhat-operators-m25wn\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795123 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-utilities\") pod \"redhat-marketplace-8b6x9\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795189 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-catalog-content\") pod \"certified-operators-269h6\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795246 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r98xs\" (UniqueName: \"kubernetes.io/projected/55f01f1d-1d56-4031-98bf-5747ea5930eb-kube-api-access-r98xs\") pod \"redhat-operators-m25wn\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795298 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-serviceca\") pod \"image-pruner-29426400-qzd6w\" (UID: \"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e\") " pod="openshift-image-registry/image-pruner-29426400-qzd6w" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795341 3562 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwqqt\" (UniqueName: \"kubernetes.io/projected/869f63ac-ddad-4292-8bfc-a3179d1e211b-kube-api-access-wwqqt\") pod \"collect-profiles-29426400-jg4z2\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795383 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-utilities\") pod \"redhat-operators-m25wn\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795437 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvz7m\" (UniqueName: \"kubernetes.io/projected/8f7307bd-8e93-41cb-a32f-14b02f9b700e-kube-api-access-mvz7m\") pod \"certified-operators-269h6\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795496 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f63ac-ddad-4292-8bfc-a3179d1e211b-secret-volume\") pod \"collect-profiles-29426400-jg4z2\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795638 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n87m\" (UniqueName: \"kubernetes.io/projected/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-kube-api-access-8n87m\") pod \"image-pruner-29426400-qzd6w\" (UID: \"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e\") " pod="openshift-image-registry/image-pruner-29426400-qzd6w" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795681 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-catalog-content\") pod \"redhat-marketplace-8b6x9\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795712 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-utilities\") pod \"certified-operators-269h6\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795740 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8httk\" (UniqueName: \"kubernetes.io/projected/aa79e90a-de78-46e9-8a61-9f25c6962160-kube-api-access-8httk\") pod \"redhat-marketplace-8b6x9\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.795771 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/869f63ac-ddad-4292-8bfc-a3179d1e211b-config-volume\") pod \"collect-profiles-29426400-jg4z2\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897283 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-catalog-content\") pod \"redhat-operators-m25wn\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897359 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-utilities\") pod \"redhat-marketplace-8b6x9\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897386 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-catalog-content\") pod \"certified-operators-269h6\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897420 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r98xs\" (UniqueName: \"kubernetes.io/projected/55f01f1d-1d56-4031-98bf-5747ea5930eb-kube-api-access-r98xs\") pod \"redhat-operators-m25wn\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897457 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-serviceca\") pod \"image-pruner-29426400-qzd6w\" (UID: \"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e\") " pod="openshift-image-registry/image-pruner-29426400-qzd6w" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897488 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wwqqt\" (UniqueName: \"kubernetes.io/projected/869f63ac-ddad-4292-8bfc-a3179d1e211b-kube-api-access-wwqqt\") pod \"collect-profiles-29426400-jg4z2\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897518 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-utilities\") pod \"redhat-operators-m25wn\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897546 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-mvz7m\" (UniqueName: \"kubernetes.io/projected/8f7307bd-8e93-41cb-a32f-14b02f9b700e-kube-api-access-mvz7m\") pod \"certified-operators-269h6\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897578 3562 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f63ac-ddad-4292-8bfc-a3179d1e211b-secret-volume\") pod \"collect-profiles-29426400-jg4z2\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897646 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8n87m\" (UniqueName: \"kubernetes.io/projected/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-kube-api-access-8n87m\") pod \"image-pruner-29426400-qzd6w\" (UID: \"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e\") " pod="openshift-image-registry/image-pruner-29426400-qzd6w" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897677 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-catalog-content\") pod \"redhat-marketplace-8b6x9\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897705 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-utilities\") pod \"certified-operators-269h6\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897736 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8httk\" (UniqueName: \"kubernetes.io/projected/aa79e90a-de78-46e9-8a61-9f25c6962160-kube-api-access-8httk\") pod \"redhat-marketplace-8b6x9\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.897764 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f63ac-ddad-4292-8bfc-a3179d1e211b-config-volume\") pod \"collect-profiles-29426400-jg4z2\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.898855 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-utilities\") pod \"redhat-marketplace-8b6x9\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.898954 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-catalog-content\") pod \"redhat-operators-m25wn\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.899011 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-utilities\") pod \"certified-operators-269h6\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.899048 3562 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-catalog-content\") pod \"redhat-marketplace-8b6x9\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.899245 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-utilities\") pod \"redhat-operators-m25wn\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.902842 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-serviceca\") pod \"image-pruner-29426400-qzd6w\" (UID: \"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e\") " pod="openshift-image-registry/image-pruner-29426400-qzd6w" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.914161 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f63ac-ddad-4292-8bfc-a3179d1e211b-secret-volume\") pod \"collect-profiles-29426400-jg4z2\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.921289 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f63ac-ddad-4292-8bfc-a3179d1e211b-config-volume\") pod \"collect-profiles-29426400-jg4z2\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.927681 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-catalog-content\") pod \"certified-operators-269h6\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.929077 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n87m\" (UniqueName: \"kubernetes.io/projected/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-kube-api-access-8n87m\") pod \"image-pruner-29426400-qzd6w\" (UID: \"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e\") " pod="openshift-image-registry/image-pruner-29426400-qzd6w" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.934160 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-r98xs\" (UniqueName: \"kubernetes.io/projected/55f01f1d-1d56-4031-98bf-5747ea5930eb-kube-api-access-r98xs\") pod \"redhat-operators-m25wn\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.937304 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvz7m\" (UniqueName: \"kubernetes.io/projected/8f7307bd-8e93-41cb-a32f-14b02f9b700e-kube-api-access-mvz7m\") pod \"certified-operators-269h6\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.937823 
3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwqqt\" (UniqueName: \"kubernetes.io/projected/869f63ac-ddad-4292-8bfc-a3179d1e211b-kube-api-access-wwqqt\") pod \"collect-profiles-29426400-jg4z2\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.942371 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8httk\" (UniqueName: \"kubernetes.io/projected/aa79e90a-de78-46e9-8a61-9f25c6962160-kube-api-access-8httk\") pod \"redhat-marketplace-8b6x9\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.967405 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29426400-qzd6w" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.982979 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:08:45 crc kubenswrapper[3562]: I1213 00:08:45.997445 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:08:46 crc kubenswrapper[3562]: I1213 00:08:46.019997 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:08:46 crc kubenswrapper[3562]: I1213 00:08:46.040136 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:08:51 crc kubenswrapper[3562]: I1213 00:08:51.397408 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-65476884b9-9wcvx" Dec 13 00:08:51 crc kubenswrapper[3562]: I1213 00:08:51.763501 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:51 crc kubenswrapper[3562]: I1213 00:08:51.769238 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-644bb77b49-5x5xk" Dec 13 00:08:53 crc kubenswrapper[3562]: I1213 00:08:53.744851 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m25wn"] Dec 13 00:08:53 crc kubenswrapper[3562]: I1213 00:08:53.786035 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2"] Dec 13 00:09:03 crc kubenswrapper[3562]: I1213 00:09:03.728025 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:09:03 crc kubenswrapper[3562]: I1213 00:09:03.728613 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:09:03 crc kubenswrapper[3562]: I1213 00:09:03.728638 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:09:03 crc kubenswrapper[3562]: I1213 00:09:03.728665 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:09:03 crc kubenswrapper[3562]: I1213 00:09:03.728706 3562 kubelet_getters.go:187] "Pod status updated" 
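Note: the periodic "Pod status updated ... Running" lines for etcd, the scheduler, the controller manager, and the API server are routine status reporting for the node's static control-plane pods, emitted on a housekeeping tick rather than in response to any event. A trivial ticker-driven sketch of that cadence (interval and pod list invented for the demo):

package main

import (
	"fmt"
	"time"
)

func main() {
	staticPods := []string{
		"openshift-etcd/etcd-crc",
		"openshift-kube-apiserver/kube-apiserver-crc",
	}
	tick := time.NewTicker(500 * time.Millisecond) // real cadence differs
	defer tick.Stop()
	for i := 0; i < 2; i++ {
		<-tick.C
		for _, p := range staticPods {
			fmt.Printf("Pod status updated pod=%q status=%q\n", p, "Running")
		}
	}
}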
pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:09:10 crc kubenswrapper[3562]: I1213 00:09:10.988030 3562 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="22511aa8d6dc79c1754301b9800a2a0b1dc4d8841bd28ffd8250d62e84371bed" exitCode=0 Dec 13 00:09:10 crc kubenswrapper[3562]: I1213 00:09:10.988117 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"22511aa8d6dc79c1754301b9800a2a0b1dc4d8841bd28ffd8250d62e84371bed"} Dec 13 00:09:11 crc kubenswrapper[3562]: I1213 00:09:11.076714 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-v54bt" Dec 13 00:09:11 crc kubenswrapper[3562]: I1213 00:09:11.315119 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Dec 13 00:09:11 crc kubenswrapper[3562]: W1213 00:09:11.729557 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod869f63ac_ddad_4292_8bfc_a3179d1e211b.slice/crio-bcefc6b7879e5643ee33536769a8e67330fce7acd47837be2b8f5e10989eb05f WatchSource:0}: Error finding container bcefc6b7879e5643ee33536769a8e67330fce7acd47837be2b8f5e10989eb05f: Status 404 returned error can't find the container with id bcefc6b7879e5643ee33536769a8e67330fce7acd47837be2b8f5e10989eb05f Dec 13 00:09:11 crc kubenswrapper[3562]: I1213 00:09:11.994059 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" event={"ID":"869f63ac-ddad-4292-8bfc-a3179d1e211b","Type":"ContainerStarted","Data":"bcefc6b7879e5643ee33536769a8e67330fce7acd47837be2b8f5e10989eb05f"} Dec 13 00:09:12 crc kubenswrapper[3562]: I1213 00:09:12.108695 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8b6x9"] Dec 13 00:09:12 crc kubenswrapper[3562]: I1213 00:09:12.176227 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29426400-qzd6w"] Dec 13 00:09:12 crc kubenswrapper[3562]: I1213 00:09:12.180235 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-269h6"] Dec 13 00:09:13 crc kubenswrapper[3562]: I1213 00:09:13.052710 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:09:13 crc kubenswrapper[3562]: I1213 00:09:13.052826 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:09:22 crc kubenswrapper[3562]: I1213 00:09:22.417077 3562 patch_prober.go:28] interesting pod/authentication-operator-7cc7ff75d5-g9qv8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 00:09:22 crc kubenswrapper[3562]: I1213 00:09:22.417675 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 00:09:30 crc kubenswrapper[3562]: W1213 00:09:30.349491 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f7307bd_8e93_41cb_a32f_14b02f9b700e.slice/crio-5e8fbc635297183be9ecb4ce8fb67e417c6e0bc75ca99cdeec7db7bad7b5c26a WatchSource:0}: Error finding container 5e8fbc635297183be9ecb4ce8fb67e417c6e0bc75ca99cdeec7db7bad7b5c26a: Status 404 returned error can't find the container with id 5e8fbc635297183be9ecb4ce8fb67e417c6e0bc75ca99cdeec7db7bad7b5c26a Dec 13 00:09:30 crc kubenswrapper[3562]: W1213 00:09:30.350998 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa79e90a_de78_46e9_8a61_9f25c6962160.slice/crio-b36fae6134bd42247aca3356509e41287b3310ae56b59b341013efe70e5ba585 WatchSource:0}: Error finding container b36fae6134bd42247aca3356509e41287b3310ae56b59b341013efe70e5ba585: Status 404 returned error can't find the container with id b36fae6134bd42247aca3356509e41287b3310ae56b59b341013efe70e5ba585 Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.085126 3562 generic.go:334] "Generic (PLEG): container finished" podID="55f01f1d-1d56-4031-98bf-5747ea5930eb" containerID="bd955c68381b02a2021f0f5ea539445f04b3c94f909bf63c8ae12524186cbff8" exitCode=0 Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.085222 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25wn" event={"ID":"55f01f1d-1d56-4031-98bf-5747ea5930eb","Type":"ContainerDied","Data":"bd955c68381b02a2021f0f5ea539445f04b3c94f909bf63c8ae12524186cbff8"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.085740 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25wn" event={"ID":"55f01f1d-1d56-4031-98bf-5747ea5930eb","Type":"ContainerStarted","Data":"11854d04c91d685417b2a4e2c9ce325f7c53c9d73f0a723dcf28ea050a3ff726"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.087275 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" event={"ID":"869f63ac-ddad-4292-8bfc-a3179d1e211b","Type":"ContainerStarted","Data":"01dafc9ca777cd94dab89e4845abb4cced01a7d328d2860fb98a059ed89d7aac"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.090701 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29426400-qzd6w" event={"ID":"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e","Type":"ContainerStarted","Data":"a3c629594c01827b2684e9bed2b1b04172774fefdaf1dc34171fa332619e4364"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.090757 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29426400-qzd6w" event={"ID":"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e","Type":"ContainerStarted","Data":"82b74fcd8468283cc5dbd7119dcb6524450a081b9e20eee148a13d696cd86748"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.092270 3562 
generic.go:334] "Generic (PLEG): container finished" podID="aa79e90a-de78-46e9-8a61-9f25c6962160" containerID="7806cc68e60d04203ee0f1209cdbde5993d326734f4fc921592f5f6788129fd9" exitCode=0 Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.092341 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8b6x9" event={"ID":"aa79e90a-de78-46e9-8a61-9f25c6962160","Type":"ContainerDied","Data":"7806cc68e60d04203ee0f1209cdbde5993d326734f4fc921592f5f6788129fd9"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.092382 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8b6x9" event={"ID":"aa79e90a-de78-46e9-8a61-9f25c6962160","Type":"ContainerStarted","Data":"b36fae6134bd42247aca3356509e41287b3310ae56b59b341013efe70e5ba585"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.095257 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerStarted","Data":"dad6cb5334e0f97e7c9657edc6fcbb6b3055950af6406bf65e7e1be2ad84096b"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.097049 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"5e639a435ec69948e440496faba9ac55d32c12817da197907e010b5ab1795519"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.102794 3562 generic.go:334] "Generic (PLEG): container finished" podID="8f7307bd-8e93-41cb-a32f-14b02f9b700e" containerID="d6d4b9c28fded6df7896005f9653d64456cda55487c6764afe774f207a199fc0" exitCode=0 Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.102880 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-269h6" event={"ID":"8f7307bd-8e93-41cb-a32f-14b02f9b700e","Type":"ContainerDied","Data":"d6d4b9c28fded6df7896005f9653d64456cda55487c6764afe774f207a199fc0"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.102922 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-269h6" event={"ID":"8f7307bd-8e93-41cb-a32f-14b02f9b700e","Type":"ContainerStarted","Data":"5e8fbc635297183be9ecb4ce8fb67e417c6e0bc75ca99cdeec7db7bad7b5c26a"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.107014 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"9a91ad596b66a81e7f436cb217efca67112e2429dbe9589bab52d6fa92a3401e"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.111644 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"f69be4541a751a4e3537c768d6b4af918e429f429a9f8c5bc3f5af0cbb1e1118"} Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.144963 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" podStartSLOduration=165.144892974 podStartE2EDuration="2m45.144892974s" podCreationTimestamp="2025-12-13 00:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:09:31.14214216 +0000 UTC m=+207.625372693" 
watchObservedRunningTime="2025-12-13 00:09:31.144892974 +0000 UTC m=+207.628123497" Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.160389 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29426400-qzd6w" podStartSLOduration=165.160353931 podStartE2EDuration="2m45.160353931s" podCreationTimestamp="2025-12-13 00:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:09:31.159349353 +0000 UTC m=+207.642579876" watchObservedRunningTime="2025-12-13 00:09:31.160353931 +0000 UTC m=+207.643584454" Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.468823 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:09:31 crc kubenswrapper[3562]: I1213 00:09:31.471865 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:09:32 crc kubenswrapper[3562]: I1213 00:09:32.117112 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"08b2b1ba9113187f4ab836ca671536861f978f8ebf3c50117ce097f1fff70c4e"} Dec 13 00:09:32 crc kubenswrapper[3562]: I1213 00:09:32.119458 3562 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="f69be4541a751a4e3537c768d6b4af918e429f429a9f8c5bc3f5af0cbb1e1118" exitCode=0 Dec 13 00:09:32 crc kubenswrapper[3562]: I1213 00:09:32.119528 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"f69be4541a751a4e3537c768d6b4af918e429f429a9f8c5bc3f5af0cbb1e1118"} Dec 13 00:09:32 crc kubenswrapper[3562]: I1213 00:09:32.121390 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"29b8871358c1451121da8e9f91cfa5d2cd4cc2d3d5601820229617f5e2b27852"} Dec 13 00:09:32 crc kubenswrapper[3562]: I1213 00:09:32.121732 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:09:32 crc kubenswrapper[3562]: I1213 00:09:32.133125 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Dec 13 00:09:33 crc kubenswrapper[3562]: I1213 00:09:33.125780 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8b6x9" event={"ID":"aa79e90a-de78-46e9-8a61-9f25c6962160","Type":"ContainerStarted","Data":"2b55d077e50cbb7b2eb0e7615450f15faacc1e68c64d6ae0ceef984f7c2792c1"} Dec 13 00:09:33 crc kubenswrapper[3562]: I1213 00:09:33.128413 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-269h6" event={"ID":"8f7307bd-8e93-41cb-a32f-14b02f9b700e","Type":"ContainerStarted","Data":"6a9717e197eea464bc3c4dbb9b358e7fe44cdd06ab3e4c1407ff9c97cadbbde7"} Dec 13 00:09:34 crc kubenswrapper[3562]: I1213 00:09:34.164365 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25wn" 
event={"ID":"55f01f1d-1d56-4031-98bf-5747ea5930eb","Type":"ContainerStarted","Data":"a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2"} Dec 13 00:09:34 crc kubenswrapper[3562]: I1213 00:09:34.168967 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"20fb453bb5b5891a5efb32730e5534354115809179e2939be024826b1a868574"} Dec 13 00:09:36 crc kubenswrapper[3562]: I1213 00:09:36.185518 3562 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="5e639a435ec69948e440496faba9ac55d32c12817da197907e010b5ab1795519" exitCode=0 Dec 13 00:09:36 crc kubenswrapper[3562]: I1213 00:09:36.185586 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"5e639a435ec69948e440496faba9ac55d32c12817da197907e010b5ab1795519"} Dec 13 00:09:37 crc kubenswrapper[3562]: I1213 00:09:37.194101 3562 generic.go:334] "Generic (PLEG): container finished" podID="869f63ac-ddad-4292-8bfc-a3179d1e211b" containerID="01dafc9ca777cd94dab89e4845abb4cced01a7d328d2860fb98a059ed89d7aac" exitCode=0 Dec 13 00:09:37 crc kubenswrapper[3562]: I1213 00:09:37.194273 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" event={"ID":"869f63ac-ddad-4292-8bfc-a3179d1e211b","Type":"ContainerDied","Data":"01dafc9ca777cd94dab89e4845abb4cced01a7d328d2860fb98a059ed89d7aac"} Dec 13 00:09:38 crc kubenswrapper[3562]: I1213 00:09:38.810874 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:09:38 crc kubenswrapper[3562]: I1213 00:09:38.884432 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f63ac-ddad-4292-8bfc-a3179d1e211b-secret-volume\") pod \"869f63ac-ddad-4292-8bfc-a3179d1e211b\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " Dec 13 00:09:38 crc kubenswrapper[3562]: I1213 00:09:38.884892 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwqqt\" (UniqueName: \"kubernetes.io/projected/869f63ac-ddad-4292-8bfc-a3179d1e211b-kube-api-access-wwqqt\") pod \"869f63ac-ddad-4292-8bfc-a3179d1e211b\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " Dec 13 00:09:38 crc kubenswrapper[3562]: I1213 00:09:38.885002 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f63ac-ddad-4292-8bfc-a3179d1e211b-config-volume\") pod \"869f63ac-ddad-4292-8bfc-a3179d1e211b\" (UID: \"869f63ac-ddad-4292-8bfc-a3179d1e211b\") " Dec 13 00:09:38 crc kubenswrapper[3562]: I1213 00:09:38.885652 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869f63ac-ddad-4292-8bfc-a3179d1e211b-config-volume" (OuterVolumeSpecName: "config-volume") pod "869f63ac-ddad-4292-8bfc-a3179d1e211b" (UID: "869f63ac-ddad-4292-8bfc-a3179d1e211b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:09:38 crc kubenswrapper[3562]: I1213 00:09:38.890137 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869f63ac-ddad-4292-8bfc-a3179d1e211b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "869f63ac-ddad-4292-8bfc-a3179d1e211b" (UID: "869f63ac-ddad-4292-8bfc-a3179d1e211b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 00:09:38 crc kubenswrapper[3562]: I1213 00:09:38.903318 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869f63ac-ddad-4292-8bfc-a3179d1e211b-kube-api-access-wwqqt" (OuterVolumeSpecName: "kube-api-access-wwqqt") pod "869f63ac-ddad-4292-8bfc-a3179d1e211b" (UID: "869f63ac-ddad-4292-8bfc-a3179d1e211b"). InnerVolumeSpecName "kube-api-access-wwqqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:09:38 crc kubenswrapper[3562]: I1213 00:09:38.987636 3562 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f63ac-ddad-4292-8bfc-a3179d1e211b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:38 crc kubenswrapper[3562]: I1213 00:09:38.987673 3562 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f63ac-ddad-4292-8bfc-a3179d1e211b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:38 crc kubenswrapper[3562]: I1213 00:09:38.987689 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wwqqt\" (UniqueName: \"kubernetes.io/projected/869f63ac-ddad-4292-8bfc-a3179d1e211b-kube-api-access-wwqqt\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:39 crc kubenswrapper[3562]: I1213 00:09:39.208529 3562 generic.go:334] "Generic (PLEG): container finished" podID="aa79e90a-de78-46e9-8a61-9f25c6962160" containerID="2b55d077e50cbb7b2eb0e7615450f15faacc1e68c64d6ae0ceef984f7c2792c1" exitCode=0 Dec 13 00:09:39 crc kubenswrapper[3562]: I1213 00:09:39.208620 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8b6x9" event={"ID":"aa79e90a-de78-46e9-8a61-9f25c6962160","Type":"ContainerDied","Data":"2b55d077e50cbb7b2eb0e7615450f15faacc1e68c64d6ae0ceef984f7c2792c1"} Dec 13 00:09:39 crc kubenswrapper[3562]: I1213 00:09:39.215634 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" Dec 13 00:09:39 crc kubenswrapper[3562]: I1213 00:09:39.216029 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-jg4z2" event={"ID":"869f63ac-ddad-4292-8bfc-a3179d1e211b","Type":"ContainerDied","Data":"bcefc6b7879e5643ee33536769a8e67330fce7acd47837be2b8f5e10989eb05f"} Dec 13 00:09:39 crc kubenswrapper[3562]: I1213 00:09:39.216076 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcefc6b7879e5643ee33536769a8e67330fce7acd47837be2b8f5e10989eb05f" Dec 13 00:09:39 crc kubenswrapper[3562]: I1213 00:09:39.225257 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"2a6b1dad8b1de3289669264a5a64b303562b741b5a7b200ec6d718ddf7da5445"} Dec 13 00:09:39 crc kubenswrapper[3562]: I1213 00:09:39.290265 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Dec 13 00:09:39 crc kubenswrapper[3562]: I1213 00:09:39.295472 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Dec 13 00:09:39 crc kubenswrapper[3562]: I1213 00:09:39.882455 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" path="/var/lib/kubelet/pods/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27/volumes" Dec 13 00:09:40 crc kubenswrapper[3562]: I1213 00:09:40.990255 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:09:40 crc kubenswrapper[3562]: I1213 00:09:40.990604 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:09:41 crc kubenswrapper[3562]: I1213 00:09:41.356256 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:09:41 crc kubenswrapper[3562]: I1213 00:09:41.837307 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:09:41 crc kubenswrapper[3562]: I1213 00:09:41.837390 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:09:41 crc kubenswrapper[3562]: I1213 00:09:41.953448 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:09:42 crc kubenswrapper[3562]: I1213 00:09:42.242159 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8b6x9" event={"ID":"aa79e90a-de78-46e9-8a61-9f25c6962160","Type":"ContainerStarted","Data":"f241c4f91f1dc8c39f817b784e103b25d79938e912056fd03dfc0517d1698fd6"} Dec 13 00:09:42 crc kubenswrapper[3562]: I1213 00:09:42.343803 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:09:43 crc kubenswrapper[3562]: I1213 00:09:43.055863 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:09:43 crc kubenswrapper[3562]: I1213 00:09:43.055949 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:09:43 crc kubenswrapper[3562]: I1213 00:09:43.055987 3562 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 13 00:09:43 crc kubenswrapper[3562]: I1213 00:09:43.056704 3562 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f050250cf2ec95c071158dd93704a2edcae951b1c7c4cbefc8d8a73798e9ee6"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 00:09:43 crc kubenswrapper[3562]: I1213 00:09:43.056867 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://6f050250cf2ec95c071158dd93704a2edcae951b1c7c4cbefc8d8a73798e9ee6" gracePeriod=600 Dec 13 00:09:43 crc kubenswrapper[3562]: I1213 00:09:43.259275 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8b6x9" podStartSLOduration=58.845501587 podStartE2EDuration="1m7.259224488s" podCreationTimestamp="2025-12-13 00:08:36 +0000 UTC" firstStartedPulling="2025-12-13 00:09:31.095124753 +0000 UTC m=+207.578355276" lastFinishedPulling="2025-12-13 00:09:39.508847654 +0000 UTC m=+215.992078177" observedRunningTime="2025-12-13 00:09:43.257301847 +0000 UTC m=+219.740532370" watchObservedRunningTime="2025-12-13 00:09:43.259224488 +0000 UTC m=+219.742455021" Dec 13 00:09:45 crc kubenswrapper[3562]: I1213 00:09:45.254373 3562 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="6f050250cf2ec95c071158dd93704a2edcae951b1c7c4cbefc8d8a73798e9ee6" exitCode=0 Dec 13 00:09:45 crc kubenswrapper[3562]: I1213 00:09:45.254443 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"6f050250cf2ec95c071158dd93704a2edcae951b1c7c4cbefc8d8a73798e9ee6"} Dec 13 00:09:46 crc kubenswrapper[3562]: I1213 00:09:46.021057 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:09:46 crc kubenswrapper[3562]: I1213 00:09:46.021323 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:09:46 crc kubenswrapper[3562]: I1213 00:09:46.107075 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:09:46 crc kubenswrapper[3562]: I1213 00:09:46.352601 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:09:46 crc kubenswrapper[3562]: 
I1213 00:09:46.574093 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8b6x9"] Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.085033 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-269h6"] Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.085291 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-269h6" podUID="8f7307bd-8e93-41cb-a32f-14b02f9b700e" containerName="extract-content" containerID="cri-o://6a9717e197eea464bc3c4dbb9b358e7fe44cdd06ab3e4c1407ff9c97cadbbde7" gracePeriod=30 Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.100463 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7287f"] Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.101142 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" containerID="cri-o://2a6b1dad8b1de3289669264a5a64b303562b741b5a7b200ec6d718ddf7da5445" gracePeriod=30 Dec 13 00:09:47 crc kubenswrapper[3562]: E1213 00:09:47.107703 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a6b1dad8b1de3289669264a5a64b303562b741b5a7b200ec6d718ddf7da5445" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 00:09:47 crc kubenswrapper[3562]: E1213 00:09:47.110928 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a6b1dad8b1de3289669264a5a64b303562b741b5a7b200ec6d718ddf7da5445" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.115061 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jhz6"] Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.115590 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content" containerID="cri-o://29b8871358c1451121da8e9f91cfa5d2cd4cc2d3d5601820229617f5e2b27852" gracePeriod=30 Dec 13 00:09:47 crc kubenswrapper[3562]: E1213 00:09:47.116348 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a6b1dad8b1de3289669264a5a64b303562b741b5a7b200ec6d718ddf7da5445" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 00:09:47 crc kubenswrapper[3562]: E1213 00:09:47.116412 3562 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.119432 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdddl"] Dec 13 00:09:47 crc kubenswrapper[3562]: 
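Note: the registry-server containers are probed by exec'ing grpc_health_probe -addr=:50051 inside the pod, so the "cannot register an exec PID: container is stopping" errors above are the expected race when a probe fires against a pod that is already being deleted. What the probe binary performs is a standard gRPC health Check; a minimal client sketch against a placeholder address (the import paths are grpc-go's real health API):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	conn, err := grpc.Dial("127.0.0.1:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Same RPC grpc_health_probe issues: Health.Check on the default service.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx,
		&healthpb.HealthCheckRequest{Service: ""})
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING when healthy
}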
I1213 00:09:47.119673 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-content" containerID="cri-o://dad6cb5334e0f97e7c9657edc6fcbb6b3055950af6406bf65e7e1be2ad84096b" gracePeriod=30 Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.127647 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-f9xdt"] Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.128172 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" containerID="cri-o://7d935ae38eafd288b0bf6684585036fd5fa879811f459a17e350801d74397673" gracePeriod=30 Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.138616 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8s8pc"] Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.138871 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" containerID="cri-o://20fb453bb5b5891a5efb32730e5534354115809179e2939be024826b1a868574" gracePeriod=30 Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.156317 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4jkp"] Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.156639 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content" containerID="cri-o://08b2b1ba9113187f4ab836ca671536861f978f8ebf3c50117ce097f1fff70c4e" gracePeriod=30 Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.164823 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m25wn"] Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.165092 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m25wn" podUID="55f01f1d-1d56-4031-98bf-5747ea5930eb" containerName="extract-content" containerID="cri-o://a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2" gracePeriod=30 Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.169632 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-cpx94"] Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.169800 3562 topology_manager.go:215] "Topology Admit Handler" podUID="39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b" podNamespace="openshift-marketplace" podName="marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: E1213 00:09:47.170623 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="869f63ac-ddad-4292-8bfc-a3179d1e211b" containerName="collect-profiles" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.170642 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="869f63ac-ddad-4292-8bfc-a3179d1e211b" containerName="collect-profiles" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.170741 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="869f63ac-ddad-4292-8bfc-a3179d1e211b" 
containerName="collect-profiles" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.171340 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.175189 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-cpx94"] Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.175264 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-b4zbk" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.191498 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-cpx94\" (UID: \"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b\") " pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.191547 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-cpx94\" (UID: \"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b\") " pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.191675 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl5l9\" (UniqueName: \"kubernetes.io/projected/39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b-kube-api-access-dl5l9\") pod \"marketplace-operator-8b455464d-cpx94\" (UID: \"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b\") " pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.279077 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sdddl_fc9c9ba0-fcbb-4e78-8cf5-a059ec435760/extract-content/1.log" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.280255 3562 generic.go:334] "Generic (PLEG): container finished" podID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerID="dad6cb5334e0f97e7c9657edc6fcbb6b3055950af6406bf65e7e1be2ad84096b" exitCode=2 Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.280320 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"dad6cb5334e0f97e7c9657edc6fcbb6b3055950af6406bf65e7e1be2ad84096b"} Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.285083 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-269h6_8f7307bd-8e93-41cb-a32f-14b02f9b700e/extract-content/0.log" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.286749 3562 generic.go:334] "Generic (PLEG): container finished" podID="8f7307bd-8e93-41cb-a32f-14b02f9b700e" containerID="6a9717e197eea464bc3c4dbb9b358e7fe44cdd06ab3e4c1407ff9c97cadbbde7" exitCode=2 Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.286896 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-269h6" 
event={"ID":"8f7307bd-8e93-41cb-a32f-14b02f9b700e","Type":"ContainerDied","Data":"6a9717e197eea464bc3c4dbb9b358e7fe44cdd06ab3e4c1407ff9c97cadbbde7"} Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.292674 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-cpx94\" (UID: \"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b\") " pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.292732 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-cpx94\" (UID: \"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b\") " pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.292770 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dl5l9\" (UniqueName: \"kubernetes.io/projected/39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b-kube-api-access-dl5l9\") pod \"marketplace-operator-8b455464d-cpx94\" (UID: \"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b\") " pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.295474 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-cpx94\" (UID: \"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b\") " pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.299590 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-cpx94\" (UID: \"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b\") " pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.311980 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl5l9\" (UniqueName: \"kubernetes.io/projected/39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b-kube-api-access-dl5l9\") pod \"marketplace-operator-8b455464d-cpx94\" (UID: \"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b\") " pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.718867 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:47 crc kubenswrapper[3562]: I1213 00:09:47.888377 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-cpx94"] Dec 13 00:09:47 crc kubenswrapper[3562]: W1213 00:09:47.894571 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39e4a9ce_79cc_4ea3_b65e_fd5f4eb81c2b.slice/crio-76686e888b2c5a64787a53ccbc03c886c9d164e5e84fa989f2195c33c10e1695 WatchSource:0}: Error finding container 76686e888b2c5a64787a53ccbc03c886c9d164e5e84fa989f2195c33c10e1695: Status 404 returned error can't find the container with id 76686e888b2c5a64787a53ccbc03c886c9d164e5e84fa989f2195c33c10e1695 Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.177560 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m25wn_55f01f1d-1d56-4031-98bf-5747ea5930eb/extract-content/0.log" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.178654 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.208284 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r98xs\" (UniqueName: \"kubernetes.io/projected/55f01f1d-1d56-4031-98bf-5747ea5930eb-kube-api-access-r98xs\") pod \"55f01f1d-1d56-4031-98bf-5747ea5930eb\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.209534 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-catalog-content\") pod \"55f01f1d-1d56-4031-98bf-5747ea5930eb\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.209615 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-utilities\") pod \"55f01f1d-1d56-4031-98bf-5747ea5930eb\" (UID: \"55f01f1d-1d56-4031-98bf-5747ea5930eb\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.210228 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-utilities" (OuterVolumeSpecName: "utilities") pod "55f01f1d-1d56-4031-98bf-5747ea5930eb" (UID: "55f01f1d-1d56-4031-98bf-5747ea5930eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.229354 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f01f1d-1d56-4031-98bf-5747ea5930eb-kube-api-access-r98xs" (OuterVolumeSpecName: "kube-api-access-r98xs") pod "55f01f1d-1d56-4031-98bf-5747ea5930eb" (UID: "55f01f1d-1d56-4031-98bf-5747ea5930eb"). InnerVolumeSpecName "kube-api-access-r98xs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.276884 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sdddl_fc9c9ba0-fcbb-4e78-8cf5-a059ec435760/extract-content/1.log" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.277633 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.305826 3562 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="7d935ae38eafd288b0bf6684585036fd5fa879811f459a17e350801d74397673" exitCode=0 Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.305876 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"7d935ae38eafd288b0bf6684585036fd5fa879811f459a17e350801d74397673"} Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.307442 3562 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="20fb453bb5b5891a5efb32730e5534354115809179e2939be024826b1a868574" exitCode=0 Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.307482 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"20fb453bb5b5891a5efb32730e5534354115809179e2939be024826b1a868574"} Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.309389 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m25wn_55f01f1d-1d56-4031-98bf-5747ea5930eb/extract-content/0.log" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.309625 3562 generic.go:334] "Generic (PLEG): container finished" podID="55f01f1d-1d56-4031-98bf-5747ea5930eb" containerID="a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2" exitCode=2 Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.309664 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25wn" event={"ID":"55f01f1d-1d56-4031-98bf-5747ea5930eb","Type":"ContainerDied","Data":"a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2"} Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.309679 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25wn" event={"ID":"55f01f1d-1d56-4031-98bf-5747ea5930eb","Type":"ContainerDied","Data":"11854d04c91d685417b2a4e2c9ce325f7c53c9d73f0a723dcf28ea050a3ff726"} Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.309695 3562 scope.go:117] "RemoveContainer" containerID="a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.309811 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m25wn" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.310266 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.310319 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.310383 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.310580 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r98xs\" (UniqueName: \"kubernetes.io/projected/55f01f1d-1d56-4031-98bf-5747ea5930eb-kube-api-access-r98xs\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.310595 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.311387 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities" (OuterVolumeSpecName: "utilities") pod "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.318729 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt" (OuterVolumeSpecName: "kube-api-access-9p8gt") pod "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"). InnerVolumeSpecName "kube-api-access-9p8gt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.322522 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4jkp_4092a9f8-5acc-4932-9e90-ef962eeb301a/extract-content/1.log" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.323954 3562 generic.go:334] "Generic (PLEG): container finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="08b2b1ba9113187f4ab836ca671536861f978f8ebf3c50117ce097f1fff70c4e" exitCode=2 Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.324060 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"08b2b1ba9113187f4ab836ca671536861f978f8ebf3c50117ce097f1fff70c4e"} Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.331599 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" event={"ID":"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b","Type":"ContainerStarted","Data":"7c812d27cfaa6d43692b15930ce5bf1f650e7a18e9c23e20c2a11911dfaec05f"} Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.331639 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" event={"ID":"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b","Type":"ContainerStarted","Data":"76686e888b2c5a64787a53ccbc03c886c9d164e5e84fa989f2195c33c10e1695"} Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.333191 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jhz6_3f4dca86-e6ee-4ec9-8324-86aff960225e/extract-content/1.log" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.335538 3562 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="29b8871358c1451121da8e9f91cfa5d2cd4cc2d3d5601820229617f5e2b27852" exitCode=2 Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.335569 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"29b8871358c1451121da8e9f91cfa5d2cd4cc2d3d5601820229617f5e2b27852"} Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.338024 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sdddl_fc9c9ba0-fcbb-4e78-8cf5-a059ec435760/extract-content/1.log" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.338403 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"5391da27b802f38330bcf9e4f82a5b3e218a9f77ceb2a778f9618a3305d99b9e"} Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.338481 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.341183 3562 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="2a6b1dad8b1de3289669264a5a64b303562b741b5a7b200ec6d718ddf7da5445" exitCode=0 Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.341249 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"2a6b1dad8b1de3289669264a5a64b303562b741b5a7b200ec6d718ddf7da5445"} Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.344347 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8b6x9" podUID="aa79e90a-de78-46e9-8a61-9f25c6962160" containerName="registry-server" containerID="cri-o://f241c4f91f1dc8c39f817b784e103b25d79938e912056fd03dfc0517d1698fd6" gracePeriod=2 Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.344604 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"7a6e9eb4ac073e0dfa6ae2034844d4a0104e16d5711e34811caed4fa91b88545"} Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.412208 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.412242 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.443264 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jhz6_3f4dca86-e6ee-4ec9-8324-86aff960225e/extract-content/1.log" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.443657 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.454511 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4jkp_4092a9f8-5acc-4932-9e90-ef962eeb301a/extract-content/1.log" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.454832 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.513684 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"4092a9f8-5acc-4932-9e90-ef962eeb301a\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.514064 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"3f4dca86-e6ee-4ec9-8324-86aff960225e\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.514107 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"3f4dca86-e6ee-4ec9-8324-86aff960225e\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.514139 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"3f4dca86-e6ee-4ec9-8324-86aff960225e\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.514171 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"4092a9f8-5acc-4932-9e90-ef962eeb301a\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.514203 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"4092a9f8-5acc-4932-9e90-ef962eeb301a\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.516542 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" (OuterVolumeSpecName: "utilities") pod "3f4dca86-e6ee-4ec9-8324-86aff960225e" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.517384 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" (OuterVolumeSpecName: "utilities") pod "4092a9f8-5acc-4932-9e90-ef962eeb301a" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.521184 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" (OuterVolumeSpecName: "kube-api-access-n6sqt") pod "3f4dca86-e6ee-4ec9-8324-86aff960225e" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e"). InnerVolumeSpecName "kube-api-access-n6sqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.522983 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" (OuterVolumeSpecName: "kube-api-access-ptdrb") pod "4092a9f8-5acc-4932-9e90-ef962eeb301a" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a"). InnerVolumeSpecName "kube-api-access-ptdrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.557190 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-269h6_8f7307bd-8e93-41cb-a32f-14b02f9b700e/extract-content/0.log" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.557709 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.591998 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.594986 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.601532 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.614937 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-catalog-content\") pod \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.615689 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvz7m\" (UniqueName: \"kubernetes.io/projected/8f7307bd-8e93-41cb-a32f-14b02f9b700e-kube-api-access-mvz7m\") pod \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.615775 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-utilities\") pod \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\" (UID: \"8f7307bd-8e93-41cb-a32f-14b02f9b700e\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.616315 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.616343 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.616357 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.616371 3562 reconciler_common.go:300] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.616508 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-utilities" (OuterVolumeSpecName: "utilities") pod "8f7307bd-8e93-41cb-a32f-14b02f9b700e" (UID: "8f7307bd-8e93-41cb-a32f-14b02f9b700e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.617921 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f7307bd-8e93-41cb-a32f-14b02f9b700e-kube-api-access-mvz7m" (OuterVolumeSpecName: "kube-api-access-mvz7m") pod "8f7307bd-8e93-41cb-a32f-14b02f9b700e" (UID: "8f7307bd-8e93-41cb-a32f-14b02f9b700e"). InnerVolumeSpecName "kube-api-access-mvz7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.717733 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.717775 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.717798 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.717867 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.717895 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.717972 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.718005 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\" (UID: 
\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.718031 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.718070 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.718337 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mvz7m\" (UniqueName: \"kubernetes.io/projected/8f7307bd-8e93-41cb-a32f-14b02f9b700e-kube-api-access-mvz7m\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.718355 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.719579 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3482be94-0cdb-4e2a-889b-e5fac59fdbf5" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.720051 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" (OuterVolumeSpecName: "utilities") pod "c782cf62-a827-4677-b3c2-6f82c5f09cbb" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.720112 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" (OuterVolumeSpecName: "utilities") pod "887d596e-c519-4bfa-af90-3edd9e1b2f0f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.721613 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3482be94-0cdb-4e2a-889b-e5fac59fdbf5" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.721650 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" (OuterVolumeSpecName: "kube-api-access-tf29r") pod "c782cf62-a827-4677-b3c2-6f82c5f09cbb" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb"). InnerVolumeSpecName "kube-api-access-tf29r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.721870 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" (OuterVolumeSpecName: "kube-api-access-ncrf5") pod "887d596e-c519-4bfa-af90-3edd9e1b2f0f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f"). InnerVolumeSpecName "kube-api-access-ncrf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.723552 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" (OuterVolumeSpecName: "kube-api-access-rg2zg") pod "3482be94-0cdb-4e2a-889b-e5fac59fdbf5" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5"). InnerVolumeSpecName "kube-api-access-rg2zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.819258 3562 reconciler_common.go:300] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.819290 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.819301 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.819311 3562 reconciler_common.go:300] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.819321 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.819331 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.819341 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.888226 3562 scope.go:117] "RemoveContainer" containerID="bd955c68381b02a2021f0f5ea539445f04b3c94f909bf63c8ae12524186cbff8" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.932147 3562 scope.go:117] "RemoveContainer" containerID="a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2" Dec 13 00:09:48 crc kubenswrapper[3562]: E1213 00:09:48.940292 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2\": 
container with ID starting with a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2 not found: ID does not exist" containerID="a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.940342 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2"} err="failed to get container status \"a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2\": rpc error: code = NotFound desc = could not find container \"a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2\": container with ID starting with a7a3e703a11ab2d9ac344e1d995c26daf18dfb7281643f3c17e091c7a91dcfd2 not found: ID does not exist" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.940357 3562 scope.go:117] "RemoveContainer" containerID="bd955c68381b02a2021f0f5ea539445f04b3c94f909bf63c8ae12524186cbff8" Dec 13 00:09:48 crc kubenswrapper[3562]: E1213 00:09:48.941265 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd955c68381b02a2021f0f5ea539445f04b3c94f909bf63c8ae12524186cbff8\": container with ID starting with bd955c68381b02a2021f0f5ea539445f04b3c94f909bf63c8ae12524186cbff8 not found: ID does not exist" containerID="bd955c68381b02a2021f0f5ea539445f04b3c94f909bf63c8ae12524186cbff8" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.941363 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd955c68381b02a2021f0f5ea539445f04b3c94f909bf63c8ae12524186cbff8"} err="failed to get container status \"bd955c68381b02a2021f0f5ea539445f04b3c94f909bf63c8ae12524186cbff8\": rpc error: code = NotFound desc = could not find container \"bd955c68381b02a2021f0f5ea539445f04b3c94f909bf63c8ae12524186cbff8\": container with ID starting with bd955c68381b02a2021f0f5ea539445f04b3c94f909bf63c8ae12524186cbff8 not found: ID does not exist" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.941376 3562 scope.go:117] "RemoveContainer" containerID="dad6cb5334e0f97e7c9657edc6fcbb6b3055950af6406bf65e7e1be2ad84096b" Dec 13 00:09:48 crc kubenswrapper[3562]: I1213 00:09:48.962693 3562 scope.go:117] "RemoveContainer" containerID="9671b146e6f929a27a09d1b5a4aa07e24fcf92df9eb56f0cfa559143c9279902" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.349275 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"b80d6a778ead2c7c6ae610f4fefb614976b20c604b9176d4a7f76b6c7ca9d50d"} Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.349326 3562 scope.go:117] "RemoveContainer" containerID="7d935ae38eafd288b0bf6684585036fd5fa879811f459a17e350801d74397673" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.349422 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.354164 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4jkp_4092a9f8-5acc-4932-9e90-ef962eeb301a/extract-content/1.log" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.354568 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"12f76956853bcc4ef2d56a9b72aef74d7ef832b2e582a901495987a42af5d82f"} Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.354588 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.358149 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"c3878d719398701044049e64b6685f4563a54d6352b17533a94949338ccda9ac"} Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.358228 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.365791 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"7d0bc7d4c91e939d39cde0865eb12e34bf21ba6e9949a18d8639eb77d6c02074"} Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.365823 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.388602 3562 scope.go:117] "RemoveContainer" containerID="08b2b1ba9113187f4ab836ca671536861f978f8ebf3c50117ce097f1fff70c4e" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.388795 3562 generic.go:334] "Generic (PLEG): container finished" podID="aa79e90a-de78-46e9-8a61-9f25c6962160" containerID="f241c4f91f1dc8c39f817b784e103b25d79938e912056fd03dfc0517d1698fd6" exitCode=0 Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.388849 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8b6x9" event={"ID":"aa79e90a-de78-46e9-8a61-9f25c6962160","Type":"ContainerDied","Data":"f241c4f91f1dc8c39f817b784e103b25d79938e912056fd03dfc0517d1698fd6"} Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.399138 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-269h6_8f7307bd-8e93-41cb-a32f-14b02f9b700e/extract-content/0.log" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.399491 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-269h6" event={"ID":"8f7307bd-8e93-41cb-a32f-14b02f9b700e","Type":"ContainerDied","Data":"5e8fbc635297183be9ecb4ce8fb67e417c6e0bc75ca99cdeec7db7bad7b5c26a"} Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.399578 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-269h6" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.401801 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jhz6_3f4dca86-e6ee-4ec9-8324-86aff960225e/extract-content/1.log" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.403589 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.403678 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"7bda89fde1d856e752d73604dfe94da99a00f11f45dc4b14b6395f58e9e3fa1e"} Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.403700 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.408027 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.426086 3562 scope.go:117] "RemoveContainer" containerID="59afe7eaf0205414d90e0c74df18dd68b990d6daa45326ac28a6de2667f135ac" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.434728 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-f9xdt"] Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.438029 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-f9xdt"] Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.449063 3562 scope.go:117] "RemoveContainer" containerID="20fb453bb5b5891a5efb32730e5534354115809179e2939be024826b1a868574" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.463472 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" podStartSLOduration=2.463437303 podStartE2EDuration="2.463437303s" podCreationTimestamp="2025-12-13 00:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:09:49.459393615 +0000 UTC m=+225.942624138" watchObservedRunningTime="2025-12-13 00:09:49.463437303 +0000 UTC m=+225.946667836" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.484508 3562 scope.go:117] "RemoveContainer" containerID="f69be4541a751a4e3537c768d6b4af918e429f429a9f8c5bc3f5af0cbb1e1118" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.512561 3562 scope.go:117] "RemoveContainer" containerID="68497c68fd7f295150ee02ff86f6124d9e5ecbd3900e8d5472d038f55a055c86" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.534387 3562 scope.go:117] "RemoveContainer" containerID="2a6b1dad8b1de3289669264a5a64b303562b741b5a7b200ec6d718ddf7da5445" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.555856 3562 scope.go:117] "RemoveContainer" containerID="5e639a435ec69948e440496faba9ac55d32c12817da197907e010b5ab1795519" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.586306 3562 scope.go:117] "RemoveContainer" containerID="0577dd2efb3c46ded734294efb921191dfdc8167bdcc63fe3206f657b329da66" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.611417 3562 scope.go:117] "RemoveContainer" 
containerID="6a9717e197eea464bc3c4dbb9b358e7fe44cdd06ab3e4c1407ff9c97cadbbde7" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.642958 3562 scope.go:117] "RemoveContainer" containerID="d6d4b9c28fded6df7896005f9653d64456cda55487c6764afe774f207a199fc0" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.662729 3562 scope.go:117] "RemoveContainer" containerID="29b8871358c1451121da8e9f91cfa5d2cd4cc2d3d5601820229617f5e2b27852" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.680324 3562 scope.go:117] "RemoveContainer" containerID="e5154634081ca49fc68e1d7232116283b323c2f9aa2c68a394ff7e7932b67c29" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.883875 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" path="/var/lib/kubelet/pods/3482be94-0cdb-4e2a-889b-e5fac59fdbf5/volumes" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.935886 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c782cf62-a827-4677-b3c2-6f82c5f09cbb" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.938048 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.995052 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8s8pc"] Dec 13 00:09:49 crc kubenswrapper[3562]: I1213 00:09:49.997550 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8s8pc"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.057997 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4092a9f8-5acc-4932-9e90-ef962eeb301a" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.065300 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55f01f1d-1d56-4031-98bf-5747ea5930eb" (UID: "55f01f1d-1d56-4031-98bf-5747ea5930eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.108855 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "887d596e-c519-4bfa-af90-3edd9e1b2f0f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.128545 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f7307bd-8e93-41cb-a32f-14b02f9b700e" (UID: "8f7307bd-8e93-41cb-a32f-14b02f9b700e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.141828 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.142210 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f01f1d-1d56-4031-98bf-5747ea5930eb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.142302 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.142377 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7307bd-8e93-41cb-a32f-14b02f9b700e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.173620 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m25wn"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.176850 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m25wn"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.232189 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f4dca86-e6ee-4ec9-8324-86aff960225e" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.243367 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.250107 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.288780 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.320685 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4jkp"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.328004 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f4jkp"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.332846 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7287f"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.338764 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7287f"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.345674 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8httk\" (UniqueName: \"kubernetes.io/projected/aa79e90a-de78-46e9-8a61-9f25c6962160-kube-api-access-8httk\") pod \"aa79e90a-de78-46e9-8a61-9f25c6962160\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.345754 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-catalog-content\") pod \"aa79e90a-de78-46e9-8a61-9f25c6962160\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.345945 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-utilities\") pod \"aa79e90a-de78-46e9-8a61-9f25c6962160\" (UID: \"aa79e90a-de78-46e9-8a61-9f25c6962160\") " Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.346529 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.347327 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-utilities" (OuterVolumeSpecName: "utilities") pod "aa79e90a-de78-46e9-8a61-9f25c6962160" (UID: "aa79e90a-de78-46e9-8a61-9f25c6962160"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.358651 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa79e90a-de78-46e9-8a61-9f25c6962160-kube-api-access-8httk" (OuterVolumeSpecName: "kube-api-access-8httk") pod "aa79e90a-de78-46e9-8a61-9f25c6962160" (UID: "aa79e90a-de78-46e9-8a61-9f25c6962160"). InnerVolumeSpecName "kube-api-access-8httk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.380532 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jhz6"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.385593 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8jhz6"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.391480 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-269h6"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.393887 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-269h6"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.413957 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8b6x9" event={"ID":"aa79e90a-de78-46e9-8a61-9f25c6962160","Type":"ContainerDied","Data":"b36fae6134bd42247aca3356509e41287b3310ae56b59b341013efe70e5ba585"} Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.413996 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8b6x9" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.414005 3562 scope.go:117] "RemoveContainer" containerID="f241c4f91f1dc8c39f817b784e103b25d79938e912056fd03dfc0517d1698fd6" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.448363 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.448396 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8httk\" (UniqueName: \"kubernetes.io/projected/aa79e90a-de78-46e9-8a61-9f25c6962160-kube-api-access-8httk\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.450804 3562 scope.go:117] "RemoveContainer" containerID="2b55d077e50cbb7b2eb0e7615450f15faacc1e68c64d6ae0ceef984f7c2792c1" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.476220 3562 scope.go:117] "RemoveContainer" containerID="7806cc68e60d04203ee0f1209cdbde5993d326734f4fc921592f5f6788129fd9" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.486218 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdddl"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.489644 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sdddl"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.490002 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa79e90a-de78-46e9-8a61-9f25c6962160" (UID: "aa79e90a-de78-46e9-8a61-9f25c6962160"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.549487 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa79e90a-de78-46e9-8a61-9f25c6962160-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.750421 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8b6x9"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.754298 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8b6x9"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.987836 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kk2tn"] Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988014 3562 topology_manager.go:215] "Topology Admit Handler" podUID="9f237253-7db9-43eb-99b5-b6bd778581d8" podNamespace="openshift-marketplace" podName="certified-operators-kk2tn" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988185 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988199 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988213 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="aa79e90a-de78-46e9-8a61-9f25c6962160" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988220 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa79e90a-de78-46e9-8a61-9f25c6962160" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988230 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988236 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988246 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988252 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988261 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988268 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988279 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988285 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988294 3562 
cpu_manager.go:396] "RemoveStaleState: removing container" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988300 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988311 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988318 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988329 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988336 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988346 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8f7307bd-8e93-41cb-a32f-14b02f9b700e" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988355 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7307bd-8e93-41cb-a32f-14b02f9b700e" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988363 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="aa79e90a-de78-46e9-8a61-9f25c6962160" containerName="registry-server" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988370 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa79e90a-de78-46e9-8a61-9f25c6962160" containerName="registry-server" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988377 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988383 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988393 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8f7307bd-8e93-41cb-a32f-14b02f9b700e" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988399 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7307bd-8e93-41cb-a32f-14b02f9b700e" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988469 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="aa79e90a-de78-46e9-8a61-9f25c6962160" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988476 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa79e90a-de78-46e9-8a61-9f25c6962160" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988484 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988491 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" Dec 13 00:09:50 crc 
kubenswrapper[3562]: E1213 00:09:50.988499 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988506 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988518 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988525 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988535 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988542 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988552 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="55f01f1d-1d56-4031-98bf-5747ea5930eb" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988558 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f01f1d-1d56-4031-98bf-5747ea5930eb" containerName="extract-utilities" Dec 13 00:09:50 crc kubenswrapper[3562]: E1213 00:09:50.988567 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="55f01f1d-1d56-4031-98bf-5747ea5930eb" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988574 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f01f1d-1d56-4031-98bf-5747ea5930eb" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988705 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988721 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f7307bd-8e93-41cb-a32f-14b02f9b700e" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988730 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988741 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988748 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988758 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988767 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988794 3562 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="55f01f1d-1d56-4031-98bf-5747ea5930eb" containerName="extract-content" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.988806 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa79e90a-de78-46e9-8a61-9f25c6962160" containerName="registry-server" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.989748 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:09:50 crc kubenswrapper[3562]: I1213 00:09:50.992274 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.002335 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kk2tn"] Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.054883 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f237253-7db9-43eb-99b5-b6bd778581d8-utilities\") pod \"certified-operators-kk2tn\" (UID: \"9f237253-7db9-43eb-99b5-b6bd778581d8\") " pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.054942 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zfvc\" (UniqueName: \"kubernetes.io/projected/9f237253-7db9-43eb-99b5-b6bd778581d8-kube-api-access-6zfvc\") pod \"certified-operators-kk2tn\" (UID: \"9f237253-7db9-43eb-99b5-b6bd778581d8\") " pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.054974 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f237253-7db9-43eb-99b5-b6bd778581d8-catalog-content\") pod \"certified-operators-kk2tn\" (UID: \"9f237253-7db9-43eb-99b5-b6bd778581d8\") " pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.156261 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f237253-7db9-43eb-99b5-b6bd778581d8-utilities\") pod \"certified-operators-kk2tn\" (UID: \"9f237253-7db9-43eb-99b5-b6bd778581d8\") " pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.156310 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6zfvc\" (UniqueName: \"kubernetes.io/projected/9f237253-7db9-43eb-99b5-b6bd778581d8-kube-api-access-6zfvc\") pod \"certified-operators-kk2tn\" (UID: \"9f237253-7db9-43eb-99b5-b6bd778581d8\") " pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.156343 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f237253-7db9-43eb-99b5-b6bd778581d8-catalog-content\") pod \"certified-operators-kk2tn\" (UID: \"9f237253-7db9-43eb-99b5-b6bd778581d8\") " pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.156963 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f237253-7db9-43eb-99b5-b6bd778581d8-utilities\") pod \"certified-operators-kk2tn\" 
(UID: \"9f237253-7db9-43eb-99b5-b6bd778581d8\") " pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.157033 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f237253-7db9-43eb-99b5-b6bd778581d8-catalog-content\") pod \"certified-operators-kk2tn\" (UID: \"9f237253-7db9-43eb-99b5-b6bd778581d8\") " pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.185105 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zfvc\" (UniqueName: \"kubernetes.io/projected/9f237253-7db9-43eb-99b5-b6bd778581d8-kube-api-access-6zfvc\") pod \"certified-operators-kk2tn\" (UID: \"9f237253-7db9-43eb-99b5-b6bd778581d8\") " pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.330771 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.694105 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kk2tn"] Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.880233 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" path="/var/lib/kubelet/pods/3f4dca86-e6ee-4ec9-8324-86aff960225e/volumes" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.881148 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" path="/var/lib/kubelet/pods/4092a9f8-5acc-4932-9e90-ef962eeb301a/volumes" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.882363 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f01f1d-1d56-4031-98bf-5747ea5930eb" path="/var/lib/kubelet/pods/55f01f1d-1d56-4031-98bf-5747ea5930eb/volumes" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.882987 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" path="/var/lib/kubelet/pods/887d596e-c519-4bfa-af90-3edd9e1b2f0f/volumes" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.884367 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f7307bd-8e93-41cb-a32f-14b02f9b700e" path="/var/lib/kubelet/pods/8f7307bd-8e93-41cb-a32f-14b02f9b700e/volumes" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.885117 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa79e90a-de78-46e9-8a61-9f25c6962160" path="/var/lib/kubelet/pods/aa79e90a-de78-46e9-8a61-9f25c6962160/volumes" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.885989 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" path="/var/lib/kubelet/pods/c782cf62-a827-4677-b3c2-6f82c5f09cbb/volumes" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.887518 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" path="/var/lib/kubelet/pods/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760/volumes" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.980134 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jrtnd"] Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.980221 3562 topology_manager.go:215] "Topology Admit Handler" 
podUID="182f4041-a9bc-4b35-98c6-fa2baa049a4d" podNamespace="openshift-marketplace" podName="community-operators-jrtnd" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.981168 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.991242 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrtnd"] Dec 13 00:09:51 crc kubenswrapper[3562]: I1213 00:09:51.991310 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.067820 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182f4041-a9bc-4b35-98c6-fa2baa049a4d-catalog-content\") pod \"community-operators-jrtnd\" (UID: \"182f4041-a9bc-4b35-98c6-fa2baa049a4d\") " pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.067887 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182f4041-a9bc-4b35-98c6-fa2baa049a4d-utilities\") pod \"community-operators-jrtnd\" (UID: \"182f4041-a9bc-4b35-98c6-fa2baa049a4d\") " pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.068065 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlhg\" (UniqueName: \"kubernetes.io/projected/182f4041-a9bc-4b35-98c6-fa2baa049a4d-kube-api-access-nqlhg\") pod \"community-operators-jrtnd\" (UID: \"182f4041-a9bc-4b35-98c6-fa2baa049a4d\") " pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.169018 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182f4041-a9bc-4b35-98c6-fa2baa049a4d-catalog-content\") pod \"community-operators-jrtnd\" (UID: \"182f4041-a9bc-4b35-98c6-fa2baa049a4d\") " pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.169396 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182f4041-a9bc-4b35-98c6-fa2baa049a4d-utilities\") pod \"community-operators-jrtnd\" (UID: \"182f4041-a9bc-4b35-98c6-fa2baa049a4d\") " pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.169445 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlhg\" (UniqueName: \"kubernetes.io/projected/182f4041-a9bc-4b35-98c6-fa2baa049a4d-kube-api-access-nqlhg\") pod \"community-operators-jrtnd\" (UID: \"182f4041-a9bc-4b35-98c6-fa2baa049a4d\") " pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.169567 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182f4041-a9bc-4b35-98c6-fa2baa049a4d-catalog-content\") pod \"community-operators-jrtnd\" (UID: \"182f4041-a9bc-4b35-98c6-fa2baa049a4d\") " pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 
00:09:52.169808 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182f4041-a9bc-4b35-98c6-fa2baa049a4d-utilities\") pod \"community-operators-jrtnd\" (UID: \"182f4041-a9bc-4b35-98c6-fa2baa049a4d\") " pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.192444 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlhg\" (UniqueName: \"kubernetes.io/projected/182f4041-a9bc-4b35-98c6-fa2baa049a4d-kube-api-access-nqlhg\") pod \"community-operators-jrtnd\" (UID: \"182f4041-a9bc-4b35-98c6-fa2baa049a4d\") " pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.307384 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.378880 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5hp6q"] Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.378984 3562 topology_manager.go:215] "Topology Admit Handler" podUID="23298a96-6335-4197-a139-bafd98b8fa52" podNamespace="openshift-marketplace" podName="community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.379765 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.397308 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5hp6q"] Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.435129 3562 generic.go:334] "Generic (PLEG): container finished" podID="9f237253-7db9-43eb-99b5-b6bd778581d8" containerID="4c84cc33f55b40f526810e4cfa0c268d432262c5e5c41889d0cc02eeb9850695" exitCode=0 Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.435184 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk2tn" event={"ID":"9f237253-7db9-43eb-99b5-b6bd778581d8","Type":"ContainerDied","Data":"4c84cc33f55b40f526810e4cfa0c268d432262c5e5c41889d0cc02eeb9850695"} Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.435210 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk2tn" event={"ID":"9f237253-7db9-43eb-99b5-b6bd778581d8","Type":"ContainerStarted","Data":"140fa7c47595e6e801d7cfc270418e5b089bf9823345f29948857a7d69bd3181"} Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.472472 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-catalog-content\") pod \"community-operators-5hp6q\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.472543 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhz7q\" (UniqueName: \"kubernetes.io/projected/23298a96-6335-4197-a139-bafd98b8fa52-kube-api-access-fhz7q\") pod \"community-operators-5hp6q\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.472566 3562 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-utilities\") pod \"community-operators-5hp6q\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.477828 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrtnd"] Dec 13 00:09:52 crc kubenswrapper[3562]: W1213 00:09:52.484968 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod182f4041_a9bc_4b35_98c6_fa2baa049a4d.slice/crio-7d0240d6e16ec9c1845249357c42a2a64223866d53b206348e5871103e360327 WatchSource:0}: Error finding container 7d0240d6e16ec9c1845249357c42a2a64223866d53b206348e5871103e360327: Status 404 returned error can't find the container with id 7d0240d6e16ec9c1845249357c42a2a64223866d53b206348e5871103e360327 Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.574175 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-catalog-content\") pod \"community-operators-5hp6q\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.574501 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fhz7q\" (UniqueName: \"kubernetes.io/projected/23298a96-6335-4197-a139-bafd98b8fa52-kube-api-access-fhz7q\") pod \"community-operators-5hp6q\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.574538 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-utilities\") pod \"community-operators-5hp6q\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.574847 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-utilities\") pod \"community-operators-5hp6q\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.574966 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-catalog-content\") pod \"community-operators-5hp6q\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.581619 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sfhjq"] Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.581711 3562 topology_manager.go:215] "Topology Admit Handler" podUID="7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7" podNamespace="openshift-marketplace" podName="redhat-operators-sfhjq" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.583119 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sfhjq" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.585329 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.590566 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sfhjq"] Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.595038 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhz7q\" (UniqueName: \"kubernetes.io/projected/23298a96-6335-4197-a139-bafd98b8fa52-kube-api-access-fhz7q\") pod \"community-operators-5hp6q\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.675304 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7-catalog-content\") pod \"redhat-operators-sfhjq\" (UID: \"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7\") " pod="openshift-marketplace/redhat-operators-sfhjq" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.675506 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm54g\" (UniqueName: \"kubernetes.io/projected/7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7-kube-api-access-jm54g\") pod \"redhat-operators-sfhjq\" (UID: \"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7\") " pod="openshift-marketplace/redhat-operators-sfhjq" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.675666 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7-utilities\") pod \"redhat-operators-sfhjq\" (UID: \"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7\") " pod="openshift-marketplace/redhat-operators-sfhjq" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.714294 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.777172 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7-catalog-content\") pod \"redhat-operators-sfhjq\" (UID: \"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7\") " pod="openshift-marketplace/redhat-operators-sfhjq" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.777228 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-jm54g\" (UniqueName: \"kubernetes.io/projected/7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7-kube-api-access-jm54g\") pod \"redhat-operators-sfhjq\" (UID: \"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7\") " pod="openshift-marketplace/redhat-operators-sfhjq" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.777262 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7-utilities\") pod \"redhat-operators-sfhjq\" (UID: \"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7\") " pod="openshift-marketplace/redhat-operators-sfhjq" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.777680 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7-catalog-content\") pod \"redhat-operators-sfhjq\" (UID: \"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7\") " pod="openshift-marketplace/redhat-operators-sfhjq" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.777694 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7-utilities\") pod \"redhat-operators-sfhjq\" (UID: \"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7\") " pod="openshift-marketplace/redhat-operators-sfhjq" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.795973 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm54g\" (UniqueName: \"kubernetes.io/projected/7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7-kube-api-access-jm54g\") pod \"redhat-operators-sfhjq\" (UID: \"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7\") " pod="openshift-marketplace/redhat-operators-sfhjq" Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.874399 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5hp6q"] Dec 13 00:09:52 crc kubenswrapper[3562]: W1213 00:09:52.886611 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23298a96_6335_4197_a139_bafd98b8fa52.slice/crio-309125cf7832bf41ac3ae5c0f0a073cee3da01358cf0c383294c454f095e8202 WatchSource:0}: Error finding container 309125cf7832bf41ac3ae5c0f0a073cee3da01358cf0c383294c454f095e8202: Status 404 returned error can't find the container with id 309125cf7832bf41ac3ae5c0f0a073cee3da01358cf0c383294c454f095e8202 Dec 13 00:09:52 crc kubenswrapper[3562]: I1213 00:09:52.905021 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sfhjq" Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.075166 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sfhjq"] Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.440182 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk2tn" event={"ID":"9f237253-7db9-43eb-99b5-b6bd778581d8","Type":"ContainerStarted","Data":"9b88182f668309a98d0743372859deec79df7e3a2931a64dc2ba634bf1e3a0e4"} Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.441495 3562 generic.go:334] "Generic (PLEG): container finished" podID="7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7" containerID="fdc52eab2a3e442ee1a007f6318147a98327637ffb8cc0487a70e02162f4fe37" exitCode=0 Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.441559 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfhjq" event={"ID":"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7","Type":"ContainerDied","Data":"fdc52eab2a3e442ee1a007f6318147a98327637ffb8cc0487a70e02162f4fe37"} Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.441846 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfhjq" event={"ID":"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7","Type":"ContainerStarted","Data":"804105e0d93f12c9783b9aba05f59213ba038f1fc80ea4b4a37275e4a443b5b8"} Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.443032 3562 generic.go:334] "Generic (PLEG): container finished" podID="23298a96-6335-4197-a139-bafd98b8fa52" containerID="3a8eb6fc35b6119abe81e3ddb6b6e56ac138eba6a4270b1fbb53974853ebfc90" exitCode=0 Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.443092 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp6q" event={"ID":"23298a96-6335-4197-a139-bafd98b8fa52","Type":"ContainerDied","Data":"3a8eb6fc35b6119abe81e3ddb6b6e56ac138eba6a4270b1fbb53974853ebfc90"} Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.443235 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp6q" event={"ID":"23298a96-6335-4197-a139-bafd98b8fa52","Type":"ContainerStarted","Data":"309125cf7832bf41ac3ae5c0f0a073cee3da01358cf0c383294c454f095e8202"} Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.446660 3562 generic.go:334] "Generic (PLEG): container finished" podID="182f4041-a9bc-4b35-98c6-fa2baa049a4d" containerID="7b208aaa22971232c426c7c2587975318bb00ca3ade74e9104b9869348da7d91" exitCode=0 Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.446692 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrtnd" event={"ID":"182f4041-a9bc-4b35-98c6-fa2baa049a4d","Type":"ContainerDied","Data":"7b208aaa22971232c426c7c2587975318bb00ca3ade74e9104b9869348da7d91"} Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.446709 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrtnd" event={"ID":"182f4041-a9bc-4b35-98c6-fa2baa049a4d","Type":"ContainerStarted","Data":"7d0240d6e16ec9c1845249357c42a2a64223866d53b206348e5871103e360327"} Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.981561 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-64qm7"] Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.981701 3562 topology_manager.go:215] "Topology Admit Handler" 
podUID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" podNamespace="openshift-marketplace" podName="redhat-marketplace-64qm7" Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.982926 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.986185 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Dec 13 00:09:53 crc kubenswrapper[3562]: I1213 00:09:53.994751 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-64qm7"] Dec 13 00:09:54 crc kubenswrapper[3562]: E1213 00:09:54.004297 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Dec 13 00:09:54 crc kubenswrapper[3562]: I1213 00:09:54.097302 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-utilities\") pod \"redhat-marketplace-64qm7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:09:54 crc kubenswrapper[3562]: I1213 00:09:54.097456 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-catalog-content\") pod \"redhat-marketplace-64qm7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:09:54 crc kubenswrapper[3562]: I1213 00:09:54.097547 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7p7h\" (UniqueName: \"kubernetes.io/projected/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-kube-api-access-w7p7h\") pod \"redhat-marketplace-64qm7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:09:54 crc kubenswrapper[3562]: I1213 00:09:54.198823 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w7p7h\" (UniqueName: \"kubernetes.io/projected/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-kube-api-access-w7p7h\") pod \"redhat-marketplace-64qm7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:09:54 crc kubenswrapper[3562]: I1213 00:09:54.199211 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-utilities\") pod \"redhat-marketplace-64qm7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:09:54 crc kubenswrapper[3562]: I1213 00:09:54.199295 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-catalog-content\") pod \"redhat-marketplace-64qm7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:09:54 crc kubenswrapper[3562]: I1213 00:09:54.199707 3562 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-utilities\") pod \"redhat-marketplace-64qm7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:09:54 crc kubenswrapper[3562]: I1213 00:09:54.199781 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-catalog-content\") pod \"redhat-marketplace-64qm7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:09:54 crc kubenswrapper[3562]: I1213 00:09:54.218768 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7p7h\" (UniqueName: \"kubernetes.io/projected/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-kube-api-access-w7p7h\") pod \"redhat-marketplace-64qm7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:09:54 crc kubenswrapper[3562]: I1213 00:09:54.300716 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:09:54 crc kubenswrapper[3562]: I1213 00:09:54.817265 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-64qm7"] Dec 13 00:09:54 crc kubenswrapper[3562]: W1213 00:09:54.823151 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae12f059_fbec_4ccb_93d1_d8c606b0c6b7.slice/crio-a6e36a02a16e0ac1d9a2c36448f6448fb962963f2a2ac9df2fcba0117f58eb0f WatchSource:0}: Error finding container a6e36a02a16e0ac1d9a2c36448f6448fb962963f2a2ac9df2fcba0117f58eb0f: Status 404 returned error can't find the container with id a6e36a02a16e0ac1d9a2c36448f6448fb962963f2a2ac9df2fcba0117f58eb0f Dec 13 00:09:55 crc kubenswrapper[3562]: I1213 00:09:55.456082 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrtnd" event={"ID":"182f4041-a9bc-4b35-98c6-fa2baa049a4d","Type":"ContainerStarted","Data":"8351be9c2f168b8fe378af42347f06746e0b0d2d3457f772b0a1d987103b29c2"} Dec 13 00:09:55 crc kubenswrapper[3562]: I1213 00:09:55.460683 3562 generic.go:334] "Generic (PLEG): container finished" podID="9f237253-7db9-43eb-99b5-b6bd778581d8" containerID="9b88182f668309a98d0743372859deec79df7e3a2931a64dc2ba634bf1e3a0e4" exitCode=0 Dec 13 00:09:55 crc kubenswrapper[3562]: I1213 00:09:55.460765 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk2tn" event={"ID":"9f237253-7db9-43eb-99b5-b6bd778581d8","Type":"ContainerDied","Data":"9b88182f668309a98d0743372859deec79df7e3a2931a64dc2ba634bf1e3a0e4"} Dec 13 00:09:55 crc kubenswrapper[3562]: I1213 00:09:55.470892 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfhjq" event={"ID":"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7","Type":"ContainerStarted","Data":"ce0b61ce33a4ec36b9f4bb275ada4b7fb368061bdc74fde27db96b38381c8a4b"} Dec 13 00:09:55 crc kubenswrapper[3562]: I1213 00:09:55.475416 3562 generic.go:334] "Generic (PLEG): container finished" podID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" containerID="575e8a8215a47d53e0f5a57f6ca008dc59416d9fe2a567e2476e64d517e1237e" exitCode=0 Dec 13 00:09:55 crc kubenswrapper[3562]: I1213 00:09:55.475490 3562 kubelet.go:2461] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-64qm7" event={"ID":"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7","Type":"ContainerDied","Data":"575e8a8215a47d53e0f5a57f6ca008dc59416d9fe2a567e2476e64d517e1237e"} Dec 13 00:09:55 crc kubenswrapper[3562]: I1213 00:09:55.475515 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64qm7" event={"ID":"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7","Type":"ContainerStarted","Data":"a6e36a02a16e0ac1d9a2c36448f6448fb962963f2a2ac9df2fcba0117f58eb0f"} Dec 13 00:09:55 crc kubenswrapper[3562]: I1213 00:09:55.479197 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp6q" event={"ID":"23298a96-6335-4197-a139-bafd98b8fa52","Type":"ContainerStarted","Data":"1084562b9be9ea5c1690763adfe61300a7853febe24c9fed0e5d210bd80bf96b"} Dec 13 00:09:59 crc kubenswrapper[3562]: I1213 00:09:59.502693 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64qm7" event={"ID":"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7","Type":"ContainerStarted","Data":"96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6"} Dec 13 00:10:00 crc kubenswrapper[3562]: I1213 00:10:00.508426 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk2tn" event={"ID":"9f237253-7db9-43eb-99b5-b6bd778581d8","Type":"ContainerStarted","Data":"5f03387fd7323de919aaca3fe2741c5e3644cf709c9b537e20e43e1a4278e273"} Dec 13 00:10:00 crc kubenswrapper[3562]: I1213 00:10:00.509668 3562 generic.go:334] "Generic (PLEG): container finished" podID="23298a96-6335-4197-a139-bafd98b8fa52" containerID="1084562b9be9ea5c1690763adfe61300a7853febe24c9fed0e5d210bd80bf96b" exitCode=0 Dec 13 00:10:00 crc kubenswrapper[3562]: I1213 00:10:00.509703 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp6q" event={"ID":"23298a96-6335-4197-a139-bafd98b8fa52","Type":"ContainerDied","Data":"1084562b9be9ea5c1690763adfe61300a7853febe24c9fed0e5d210bd80bf96b"} Dec 13 00:10:01 crc kubenswrapper[3562]: I1213 00:10:01.519362 3562 generic.go:334] "Generic (PLEG): container finished" podID="182f4041-a9bc-4b35-98c6-fa2baa049a4d" containerID="8351be9c2f168b8fe378af42347f06746e0b0d2d3457f772b0a1d987103b29c2" exitCode=0 Dec 13 00:10:01 crc kubenswrapper[3562]: I1213 00:10:01.519535 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrtnd" event={"ID":"182f4041-a9bc-4b35-98c6-fa2baa049a4d","Type":"ContainerDied","Data":"8351be9c2f168b8fe378af42347f06746e0b0d2d3457f772b0a1d987103b29c2"} Dec 13 00:10:01 crc kubenswrapper[3562]: I1213 00:10:01.542430 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kk2tn" podStartSLOduration=6.453270477 podStartE2EDuration="11.541988624s" podCreationTimestamp="2025-12-13 00:09:50 +0000 UTC" firstStartedPulling="2025-12-13 00:09:52.44415013 +0000 UTC m=+228.927380653" lastFinishedPulling="2025-12-13 00:09:57.532868277 +0000 UTC m=+234.016098800" observedRunningTime="2025-12-13 00:10:01.538120729 +0000 UTC m=+238.021351252" watchObservedRunningTime="2025-12-13 00:10:01.541988624 +0000 UTC m=+238.025219137" Dec 13 00:10:03 crc kubenswrapper[3562]: I1213 00:10:03.532663 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp6q" 
event={"ID":"23298a96-6335-4197-a139-bafd98b8fa52","Type":"ContainerStarted","Data":"0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6"} Dec 13 00:10:03 crc kubenswrapper[3562]: I1213 00:10:03.534791 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrtnd" event={"ID":"182f4041-a9bc-4b35-98c6-fa2baa049a4d","Type":"ContainerStarted","Data":"de9aa589b067a54c3cc22c3dcc9c5162a9486bb8452ba0ed154330888e2c2e0e"} Dec 13 00:10:03 crc kubenswrapper[3562]: I1213 00:10:03.729257 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:10:03 crc kubenswrapper[3562]: I1213 00:10:03.729809 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:10:03 crc kubenswrapper[3562]: I1213 00:10:03.729847 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:10:03 crc kubenswrapper[3562]: I1213 00:10:03.729891 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:10:03 crc kubenswrapper[3562]: I1213 00:10:03.729957 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.028164 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff\": container with ID starting with a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff not found: ID does not exist" containerID="a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.028214 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff" err="rpc error: code = NotFound desc = could not find container \"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff\": container with ID starting with a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.028585 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649\": container with ID starting with 79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649 not found: ID does not exist" containerID="79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.028671 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649" err="rpc error: code = NotFound desc = could not find container \"79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649\": container with ID starting with 79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.029145 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843\": container with ID starting with 58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843 not found: ID does not exist" containerID="58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.029207 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843" err="rpc error: code = NotFound desc = could not find container \"58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843\": container with ID starting with 58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.029756 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786\": container with ID starting with f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786 not found: ID does not exist" containerID="f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.029787 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786" err="rpc error: code = NotFound desc = could not find container \"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786\": container with ID starting with f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.030238 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6\": container with ID starting with 3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6 not found: ID does not exist" containerID="3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.030282 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6" err="rpc error: code = NotFound desc = could not find container \"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6\": container with ID starting with 3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.030554 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4\": container with ID starting with 96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4 not found: ID does not exist" containerID="96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.030579 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4" err="rpc error: code = NotFound desc = could not find container \"96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4\": container with ID starting with 
96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.030988 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636\": container with ID starting with 936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636 not found: ID does not exist" containerID="936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.031020 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636" err="rpc error: code = NotFound desc = could not find container \"936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636\": container with ID starting with 936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.031373 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f\": container with ID starting with 821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f not found: ID does not exist" containerID="821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.031403 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f" err="rpc error: code = NotFound desc = could not find container \"821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f\": container with ID starting with 821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.033773 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9\": container with ID starting with 2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9 not found: ID does not exist" containerID="2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.033821 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9" err="rpc error: code = NotFound desc = could not find container \"2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9\": container with ID starting with 2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.034236 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077\": container with ID starting with ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077 not found: ID does not exist" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.034264 3562 kuberuntime_gc.go:360] "Error getting 
ContainerStatus for containerID" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" err="rpc error: code = NotFound desc = could not find container \"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077\": container with ID starting with ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.035885 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc\": container with ID starting with 0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc not found: ID does not exist" containerID="0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.035928 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc" err="rpc error: code = NotFound desc = could not find container \"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc\": container with ID starting with 0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.036205 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963\": container with ID starting with c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963 not found: ID does not exist" containerID="c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.036233 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963" err="rpc error: code = NotFound desc = could not find container \"c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963\": container with ID starting with c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.036562 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0\": container with ID starting with 955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0 not found: ID does not exist" containerID="955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.036608 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0" err="rpc error: code = NotFound desc = could not find container \"955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0\": container with ID starting with 955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.036978 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f\": container with ID starting with 
319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f not found: ID does not exist" containerID="319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.037007 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f" err="rpc error: code = NotFound desc = could not find container \"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f\": container with ID starting with 319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.037283 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8\": container with ID starting with 30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8 not found: ID does not exist" containerID="30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.037317 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8" err="rpc error: code = NotFound desc = could not find container \"30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8\": container with ID starting with 30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: E1213 00:10:04.037595 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8\": container with ID starting with bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8 not found: ID does not exist" containerID="bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.037623 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8" err="rpc error: code = NotFound desc = could not find container \"bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8\": container with ID starting with bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8 not found: ID does not exist" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.544443 3562 generic.go:334] "Generic (PLEG): container finished" podID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" containerID="96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6" exitCode=0 Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.545662 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64qm7" event={"ID":"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7","Type":"ContainerDied","Data":"96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6"} Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.570723 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jrtnd" podStartSLOduration=4.655592033 podStartE2EDuration="13.570679213s" podCreationTimestamp="2025-12-13 00:09:51 +0000 UTC" firstStartedPulling="2025-12-13 00:09:53.447529316 +0000 UTC m=+229.930759839" 
lastFinishedPulling="2025-12-13 00:10:02.362616496 +0000 UTC m=+238.845847019" observedRunningTime="2025-12-13 00:10:04.568593297 +0000 UTC m=+241.051823820" watchObservedRunningTime="2025-12-13 00:10:04.570679213 +0000 UTC m=+241.053909726" Dec 13 00:10:04 crc kubenswrapper[3562]: I1213 00:10:04.609155 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5hp6q" podStartSLOduration=5.130508985 podStartE2EDuration="12.609104299s" podCreationTimestamp="2025-12-13 00:09:52 +0000 UTC" firstStartedPulling="2025-12-13 00:09:53.444621557 +0000 UTC m=+229.927852080" lastFinishedPulling="2025-12-13 00:10:00.923216861 +0000 UTC m=+237.406447394" observedRunningTime="2025-12-13 00:10:04.608855832 +0000 UTC m=+241.092086355" watchObservedRunningTime="2025-12-13 00:10:04.609104299 +0000 UTC m=+241.092334832" Dec 13 00:10:07 crc kubenswrapper[3562]: I1213 00:10:07.874666 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:10:11 crc kubenswrapper[3562]: I1213 00:10:11.331228 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:10:11 crc kubenswrapper[3562]: I1213 00:10:11.331566 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:10:11 crc kubenswrapper[3562]: I1213 00:10:11.409179 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:10:11 crc kubenswrapper[3562]: I1213 00:10:11.654766 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kk2tn" Dec 13 00:10:12 crc kubenswrapper[3562]: I1213 00:10:12.307970 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:10:12 crc kubenswrapper[3562]: I1213 00:10:12.308011 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:10:12 crc kubenswrapper[3562]: I1213 00:10:12.392039 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:10:12 crc kubenswrapper[3562]: I1213 00:10:12.671562 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jrtnd" Dec 13 00:10:12 crc kubenswrapper[3562]: I1213 00:10:12.714499 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:10:12 crc kubenswrapper[3562]: I1213 00:10:12.714592 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:10:12 crc kubenswrapper[3562]: I1213 00:10:12.807090 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:10:13 crc kubenswrapper[3562]: I1213 00:10:13.672889 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:10:13 crc kubenswrapper[3562]: I1213 00:10:13.712364 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5hp6q"] Dec 13 00:10:15 crc 
kubenswrapper[3562]: I1213 00:10:15.594064 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5hp6q" podUID="23298a96-6335-4197-a139-bafd98b8fa52" containerName="registry-server" containerID="cri-o://0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6" gracePeriod=2 Dec 13 00:10:22 crc kubenswrapper[3562]: E1213 00:10:22.715790 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6 is running failed: container process not found" containerID="0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 00:10:22 crc kubenswrapper[3562]: E1213 00:10:22.716441 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6 is running failed: container process not found" containerID="0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 00:10:22 crc kubenswrapper[3562]: E1213 00:10:22.716655 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6 is running failed: container process not found" containerID="0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 00:10:22 crc kubenswrapper[3562]: E1213 00:10:22.716684 3562 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-5hp6q" podUID="23298a96-6335-4197-a139-bafd98b8fa52" containerName="registry-server" Dec 13 00:10:23 crc kubenswrapper[3562]: I1213 00:10:23.369226 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5hp6q_23298a96-6335-4197-a139-bafd98b8fa52/registry-server/0.log" Dec 13 00:10:23 crc kubenswrapper[3562]: I1213 00:10:23.370665 3562 generic.go:334] "Generic (PLEG): container finished" podID="23298a96-6335-4197-a139-bafd98b8fa52" containerID="0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6" exitCode=-1 Dec 13 00:10:23 crc kubenswrapper[3562]: I1213 00:10:23.370721 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp6q" event={"ID":"23298a96-6335-4197-a139-bafd98b8fa52","Type":"ContainerDied","Data":"0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6"} Dec 13 00:10:24 crc kubenswrapper[3562]: I1213 00:10:24.376825 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64qm7" event={"ID":"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7","Type":"ContainerStarted","Data":"5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f"} Dec 13 00:10:24 crc kubenswrapper[3562]: I1213 00:10:24.395891 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-64qm7" 
podStartSLOduration=21.613248905 podStartE2EDuration="31.39583449s" podCreationTimestamp="2025-12-13 00:09:53 +0000 UTC" firstStartedPulling="2025-12-13 00:09:55.476727514 +0000 UTC m=+231.959958037" lastFinishedPulling="2025-12-13 00:10:05.259313099 +0000 UTC m=+241.742543622" observedRunningTime="2025-12-13 00:10:24.393041734 +0000 UTC m=+260.876272277" watchObservedRunningTime="2025-12-13 00:10:24.39583449 +0000 UTC m=+260.879065013" Dec 13 00:10:26 crc kubenswrapper[3562]: I1213 00:10:26.698046 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5hp6q_23298a96-6335-4197-a139-bafd98b8fa52/registry-server/0.log" Dec 13 00:10:26 crc kubenswrapper[3562]: I1213 00:10:26.699391 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:10:26 crc kubenswrapper[3562]: I1213 00:10:26.867262 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-catalog-content\") pod \"23298a96-6335-4197-a139-bafd98b8fa52\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " Dec 13 00:10:26 crc kubenswrapper[3562]: I1213 00:10:26.867339 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-utilities\") pod \"23298a96-6335-4197-a139-bafd98b8fa52\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " Dec 13 00:10:26 crc kubenswrapper[3562]: I1213 00:10:26.867374 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhz7q\" (UniqueName: \"kubernetes.io/projected/23298a96-6335-4197-a139-bafd98b8fa52-kube-api-access-fhz7q\") pod \"23298a96-6335-4197-a139-bafd98b8fa52\" (UID: \"23298a96-6335-4197-a139-bafd98b8fa52\") " Dec 13 00:10:26 crc kubenswrapper[3562]: I1213 00:10:26.869875 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-utilities" (OuterVolumeSpecName: "utilities") pod "23298a96-6335-4197-a139-bafd98b8fa52" (UID: "23298a96-6335-4197-a139-bafd98b8fa52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:10:26 crc kubenswrapper[3562]: I1213 00:10:26.880759 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23298a96-6335-4197-a139-bafd98b8fa52-kube-api-access-fhz7q" (OuterVolumeSpecName: "kube-api-access-fhz7q") pod "23298a96-6335-4197-a139-bafd98b8fa52" (UID: "23298a96-6335-4197-a139-bafd98b8fa52"). InnerVolumeSpecName "kube-api-access-fhz7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:10:26 crc kubenswrapper[3562]: I1213 00:10:26.968984 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:10:26 crc kubenswrapper[3562]: I1213 00:10:26.969024 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fhz7q\" (UniqueName: \"kubernetes.io/projected/23298a96-6335-4197-a139-bafd98b8fa52-kube-api-access-fhz7q\") on node \"crc\" DevicePath \"\"" Dec 13 00:10:28 crc kubenswrapper[3562]: I1213 00:10:28.127424 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5hp6q_23298a96-6335-4197-a139-bafd98b8fa52/registry-server/0.log" Dec 13 00:10:28 crc kubenswrapper[3562]: I1213 00:10:28.130026 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp6q" event={"ID":"23298a96-6335-4197-a139-bafd98b8fa52","Type":"ContainerDied","Data":"309125cf7832bf41ac3ae5c0f0a073cee3da01358cf0c383294c454f095e8202"} Dec 13 00:10:28 crc kubenswrapper[3562]: I1213 00:10:28.130074 3562 scope.go:117] "RemoveContainer" containerID="0b4dfe17356fe15598c70a328472e2680db9044c65d931fd002a02a3dfd333c6" Dec 13 00:10:29 crc kubenswrapper[3562]: I1213 00:10:29.135197 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5hp6q" Dec 13 00:10:30 crc kubenswrapper[3562]: I1213 00:10:30.051663 3562 scope.go:117] "RemoveContainer" containerID="1084562b9be9ea5c1690763adfe61300a7853febe24c9fed0e5d210bd80bf96b" Dec 13 00:10:30 crc kubenswrapper[3562]: I1213 00:10:30.096701 3562 scope.go:117] "RemoveContainer" containerID="3a8eb6fc35b6119abe81e3ddb6b6e56ac138eba6a4270b1fbb53974853ebfc90" Dec 13 00:10:32 crc kubenswrapper[3562]: I1213 00:10:32.956631 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:10:34 crc kubenswrapper[3562]: I1213 00:10:34.300986 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:10:34 crc kubenswrapper[3562]: I1213 00:10:34.301040 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:10:34 crc kubenswrapper[3562]: I1213 00:10:34.374222 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:10:35 crc kubenswrapper[3562]: I1213 00:10:35.232176 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:10:36 crc kubenswrapper[3562]: I1213 00:10:36.934373 3562 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 13 00:10:36 crc kubenswrapper[3562]: I1213 00:10:36.934443 3562 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount\"" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:10:37 crc kubenswrapper[3562]: I1213 00:10:37.023073 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:10:37 crc kubenswrapper[3562]: I1213 00:10:37.278642 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x"
Dec 13 00:10:37 crc kubenswrapper[3562]: I1213 00:10:37.287595 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:10:37 crc kubenswrapper[3562]: W1213 00:10:37.689054 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a7bc46_2f44_4aff_9cb5_97c97a4a8319.slice/crio-ce558c209d6117269c8377cbe364cf9dc9f464b7745c2e681907ca27a02091a7 WatchSource:0}: Error finding container ce558c209d6117269c8377cbe364cf9dc9f464b7745c2e681907ca27a02091a7: Status 404 returned error can't find the container with id ce558c209d6117269c8377cbe364cf9dc9f464b7745c2e681907ca27a02091a7
Dec 13 00:10:38 crc kubenswrapper[3562]: I1213 00:10:38.173923 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"ce558c209d6117269c8377cbe364cf9dc9f464b7745c2e681907ca27a02091a7"}
Dec 13 00:10:38 crc kubenswrapper[3562]: I1213 00:10:38.558077 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23298a96-6335-4197-a139-bafd98b8fa52" (UID: "23298a96-6335-4197-a139-bafd98b8fa52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 00:10:38 crc kubenswrapper[3562]: I1213 00:10:38.619382 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23298a96-6335-4197-a139-bafd98b8fa52-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 00:10:38 crc kubenswrapper[3562]: I1213 00:10:38.765761 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5hp6q"]
Dec 13 00:10:38 crc kubenswrapper[3562]: I1213 00:10:38.769490 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5hp6q"]
Dec 13 00:10:39 crc kubenswrapper[3562]: I1213 00:10:39.179818 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"7bcc9373144aa1baeead614defc177c9a227076a45e5bdcad17ffc8920843d2c"}
Dec 13 00:10:39 crc kubenswrapper[3562]: I1213 00:10:39.179876 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:10:39 crc kubenswrapper[3562]: I1213 00:10:39.182715 3562 generic.go:334] "Generic (PLEG): container finished" podID="7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7" containerID="ce0b61ce33a4ec36b9f4bb275ada4b7fb368061bdc74fde27db96b38381c8a4b" exitCode=0
Dec 13 00:10:39 crc kubenswrapper[3562]: I1213 00:10:39.182766 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfhjq" event={"ID":"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7","Type":"ContainerDied","Data":"ce0b61ce33a4ec36b9f4bb275ada4b7fb368061bdc74fde27db96b38381c8a4b"}
Dec 13 00:10:39 crc kubenswrapper[3562]: I1213 00:10:39.186596 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log"
Dec 13 00:10:39 crc kubenswrapper[3562]: I1213 00:10:39.186645 3562 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="9747465a078c9ef9791281de8811e879fa60a8c179d9cf7cee2e4a6a610a6b9f" exitCode=1
Dec 13 00:10:39 crc kubenswrapper[3562]: I1213 00:10:39.186672 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"9747465a078c9ef9791281de8811e879fa60a8c179d9cf7cee2e4a6a610a6b9f"}
Dec 13 00:10:39 crc kubenswrapper[3562]: I1213 00:10:39.187119 3562 scope.go:117] "RemoveContainer" containerID="9747465a078c9ef9791281de8811e879fa60a8c179d9cf7cee2e4a6a610a6b9f"
Dec 13 00:10:39 crc kubenswrapper[3562]: I1213 00:10:39.882438 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23298a96-6335-4197-a139-bafd98b8fa52" path="/var/lib/kubelet/pods/23298a96-6335-4197-a139-bafd98b8fa52/volumes"
Dec 13 00:10:40 crc kubenswrapper[3562]: I1213 00:10:40.193688 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfhjq" event={"ID":"7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7","Type":"ContainerStarted","Data":"e791f6849dbca4c4e8d6345a3d7b175c22716a18003b7956d2bb64bb60e6df6b"}
Dec 13 00:10:40 crc kubenswrapper[3562]: I1213 00:10:40.195788 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log"
Dec 13 00:10:40 crc kubenswrapper[3562]: I1213 00:10:40.195886 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"2f9efb07f33045cd3c4a9ccf124a9fe59e0b81dc15b32b3dd42dc34be5cec8b2"}
Dec 13 00:10:40 crc kubenswrapper[3562]: I1213 00:10:40.238751 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sfhjq" podStartSLOduration=1.987032988 podStartE2EDuration="48.238704565s" podCreationTimestamp="2025-12-13 00:09:52 +0000 UTC" firstStartedPulling="2025-12-13 00:09:53.442307145 +0000 UTC m=+229.925537668" lastFinishedPulling="2025-12-13 00:10:39.693978712 +0000 UTC m=+276.177209245" observedRunningTime="2025-12-13 00:10:40.222229325 +0000 UTC m=+276.705459848" watchObservedRunningTime="2025-12-13 00:10:40.238704565 +0000 UTC m=+276.721935088"
Dec 13 00:10:42 crc kubenswrapper[3562]: I1213 00:10:42.905764 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sfhjq"
Dec 13 00:10:42 crc kubenswrapper[3562]: I1213 00:10:42.906157 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sfhjq"
Dec 13 00:10:43 crc kubenswrapper[3562]: I1213 00:10:43.987651 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sfhjq" podUID="7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7" containerName="registry-server" probeResult="failure" output=<
Dec 13 00:10:43 crc kubenswrapper[3562]: timeout: failed to connect service ":50051" within 1s
Dec 13 00:10:43 crc kubenswrapper[3562]: >
Dec 13 00:10:52 crc kubenswrapper[3562]: I1213 00:10:52.986141 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sfhjq"
Dec 13 00:10:53 crc kubenswrapper[3562]: I1213 00:10:53.081762 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sfhjq"
Dec 13 00:10:57 crc kubenswrapper[3562]: I1213 00:10:57.294801 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Dec 13 00:11:03 crc kubenswrapper[3562]: I1213 00:11:03.730364 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 13 00:11:03 crc kubenswrapper[3562]: I1213 00:11:03.730944 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 13 00:11:03 crc kubenswrapper[3562]: I1213 00:11:03.730996 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 13 00:11:03 crc kubenswrapper[3562]: I1213 00:11:03.731089 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 13 00:11:03 crc kubenswrapper[3562]: I1213 00:11:03.731118 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 13 00:11:09 crc kubenswrapper[3562]: I1213 00:11:09.333650 3562 generic.go:334] "Generic (PLEG): container finished" podID="65e6c5f4-6f11-4ec0-8ce6-a544e417c56e" containerID="a3c629594c01827b2684e9bed2b1b04172774fefdaf1dc34171fa332619e4364" exitCode=0
Dec 13 00:11:09 crc kubenswrapper[3562]: I1213 00:11:09.333748 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29426400-qzd6w" event={"ID":"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e","Type":"ContainerDied","Data":"a3c629594c01827b2684e9bed2b1b04172774fefdaf1dc34171fa332619e4364"}
Dec 13 00:11:10 crc kubenswrapper[3562]: I1213 00:11:10.542976 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29426400-qzd6w"
Dec 13 00:11:10 crc kubenswrapper[3562]: I1213 00:11:10.708097 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n87m\" (UniqueName: \"kubernetes.io/projected/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-kube-api-access-8n87m\") pod \"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e\" (UID: \"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e\") "
Dec 13 00:11:10 crc kubenswrapper[3562]: I1213 00:11:10.708148 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-serviceca\") pod \"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e\" (UID: \"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e\") "
Dec 13 00:11:10 crc kubenswrapper[3562]: I1213 00:11:10.709139 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-serviceca" (OuterVolumeSpecName: "serviceca") pod "65e6c5f4-6f11-4ec0-8ce6-a544e417c56e" (UID: "65e6c5f4-6f11-4ec0-8ce6-a544e417c56e"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 00:11:10 crc kubenswrapper[3562]: I1213 00:11:10.714028 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-kube-api-access-8n87m" (OuterVolumeSpecName: "kube-api-access-8n87m") pod "65e6c5f4-6f11-4ec0-8ce6-a544e417c56e" (UID: "65e6c5f4-6f11-4ec0-8ce6-a544e417c56e"). InnerVolumeSpecName "kube-api-access-8n87m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 00:11:10 crc kubenswrapper[3562]: I1213 00:11:10.809164 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8n87m\" (UniqueName: \"kubernetes.io/projected/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-kube-api-access-8n87m\") on node \"crc\" DevicePath \"\""
Dec 13 00:11:10 crc kubenswrapper[3562]: I1213 00:11:10.809197 3562 reconciler_common.go:300] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65e6c5f4-6f11-4ec0-8ce6-a544e417c56e-serviceca\") on node \"crc\" DevicePath \"\""
Dec 13 00:11:11 crc kubenswrapper[3562]: I1213 00:11:11.344198 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29426400-qzd6w" event={"ID":"65e6c5f4-6f11-4ec0-8ce6-a544e417c56e","Type":"ContainerDied","Data":"82b74fcd8468283cc5dbd7119dcb6524450a081b9e20eee148a13d696cd86748"}
Dec 13 00:11:11 crc kubenswrapper[3562]: I1213 00:11:11.344235 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82b74fcd8468283cc5dbd7119dcb6524450a081b9e20eee148a13d696cd86748"
Dec 13 00:11:11 crc kubenswrapper[3562]: I1213 00:11:11.344239 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29426400-qzd6w"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.865957 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7fx6n"]
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.866523 3562 topology_manager.go:215] "Topology Admit Handler" podUID="f07661a3-219e-48c0-ae91-20a88ffd8828" podNamespace="openshift-multus" podName="cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:47 crc kubenswrapper[3562]: E1213 00:11:47.866653 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23298a96-6335-4197-a139-bafd98b8fa52" containerName="extract-utilities"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.866663 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="23298a96-6335-4197-a139-bafd98b8fa52" containerName="extract-utilities"
Dec 13 00:11:47 crc kubenswrapper[3562]: E1213 00:11:47.866676 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="65e6c5f4-6f11-4ec0-8ce6-a544e417c56e" containerName="image-pruner"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.866683 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e6c5f4-6f11-4ec0-8ce6-a544e417c56e" containerName="image-pruner"
Dec 13 00:11:47 crc kubenswrapper[3562]: E1213 00:11:47.866693 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23298a96-6335-4197-a139-bafd98b8fa52" containerName="registry-server"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.866699 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="23298a96-6335-4197-a139-bafd98b8fa52" containerName="registry-server"
Dec 13 00:11:47 crc kubenswrapper[3562]: E1213 00:11:47.866707 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23298a96-6335-4197-a139-bafd98b8fa52" containerName="extract-content"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.866713 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="23298a96-6335-4197-a139-bafd98b8fa52" containerName="extract-content"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.866809 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="65e6c5f4-6f11-4ec0-8ce6-a544e417c56e" containerName="image-pruner"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.866820 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="23298a96-6335-4197-a139-bafd98b8fa52" containerName="registry-server"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.867151 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.874022 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.874601 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-smth4"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.914468 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f07661a3-219e-48c0-ae91-20a88ffd8828-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.914632 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f07661a3-219e-48c0-ae91-20a88ffd8828-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.914776 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmpw\" (UniqueName: \"kubernetes.io/projected/f07661a3-219e-48c0-ae91-20a88ffd8828-kube-api-access-vcmpw\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:47 crc kubenswrapper[3562]: I1213 00:11:47.914851 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f07661a3-219e-48c0-ae91-20a88ffd8828-ready\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:48 crc kubenswrapper[3562]: I1213 00:11:48.015693 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f07661a3-219e-48c0-ae91-20a88ffd8828-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:48 crc kubenswrapper[3562]: I1213 00:11:48.015757 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f07661a3-219e-48c0-ae91-20a88ffd8828-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:48 crc kubenswrapper[3562]: I1213 00:11:48.015796 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmpw\" (UniqueName: \"kubernetes.io/projected/f07661a3-219e-48c0-ae91-20a88ffd8828-kube-api-access-vcmpw\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:48 crc kubenswrapper[3562]: I1213 00:11:48.015824 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f07661a3-219e-48c0-ae91-20a88ffd8828-ready\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:48 crc kubenswrapper[3562]: I1213 00:11:48.015943 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f07661a3-219e-48c0-ae91-20a88ffd8828-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:48 crc kubenswrapper[3562]: I1213 00:11:48.016620 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f07661a3-219e-48c0-ae91-20a88ffd8828-ready\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:48 crc kubenswrapper[3562]: I1213 00:11:48.017037 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f07661a3-219e-48c0-ae91-20a88ffd8828-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:48 crc kubenswrapper[3562]: I1213 00:11:48.037105 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmpw\" (UniqueName: \"kubernetes.io/projected/f07661a3-219e-48c0-ae91-20a88ffd8828-kube-api-access-vcmpw\") pod \"cni-sysctl-allowlist-ds-7fx6n\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:48 crc kubenswrapper[3562]: I1213 00:11:48.181791 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:48 crc kubenswrapper[3562]: I1213 00:11:48.545675 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n" event={"ID":"f07661a3-219e-48c0-ae91-20a88ffd8828","Type":"ContainerStarted","Data":"f616002adb99719dc3601cad7598109cdd0367c2e91de0853da0a68538bd7f85"}
Dec 13 00:11:49 crc kubenswrapper[3562]: I1213 00:11:49.552663 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n" event={"ID":"f07661a3-219e-48c0-ae91-20a88ffd8828","Type":"ContainerStarted","Data":"1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b"}
Dec 13 00:11:49 crc kubenswrapper[3562]: I1213 00:11:49.571745 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n" podStartSLOduration=2.5716677580000002 podStartE2EDuration="2.571667758s" podCreationTimestamp="2025-12-13 00:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:11:49.570354722 +0000 UTC m=+346.053585235" watchObservedRunningTime="2025-12-13 00:11:49.571667758 +0000 UTC m=+346.054898301"
Dec 13 00:11:50 crc kubenswrapper[3562]: I1213 00:11:50.557877 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:50 crc kubenswrapper[3562]: I1213 00:11:50.609013 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:11:50 crc kubenswrapper[3562]: I1213 00:11:50.876614 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7fx6n"]
Dec 13 00:11:52 crc kubenswrapper[3562]: I1213 00:11:52.566473 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n" podUID="f07661a3-219e-48c0-ae91-20a88ffd8828" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b" gracePeriod=30
Dec 13 00:11:58 crc kubenswrapper[3562]: E1213 00:11:58.184801 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 00:11:58 crc kubenswrapper[3562]: E1213 00:11:58.186479 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 00:11:58 crc kubenswrapper[3562]: E1213 00:11:58.187686 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 00:11:58 crc kubenswrapper[3562]: E1213 00:11:58.187729 3562 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n" podUID="f07661a3-219e-48c0-ae91-20a88ffd8828" containerName="kube-multus-additional-cni-plugins"
Dec 13 00:12:03 crc kubenswrapper[3562]: I1213 00:12:03.732169 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 13 00:12:03 crc kubenswrapper[3562]: I1213 00:12:03.732634 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 13 00:12:03 crc kubenswrapper[3562]: I1213 00:12:03.732685 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 13 00:12:03 crc kubenswrapper[3562]: I1213 00:12:03.732732 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 13 00:12:03 crc kubenswrapper[3562]: I1213 00:12:03.732773 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 13 00:12:08 crc kubenswrapper[3562]: E1213 00:12:08.185541 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 00:12:08 crc kubenswrapper[3562]: E1213 00:12:08.186898 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 00:12:08 crc kubenswrapper[3562]: E1213 00:12:08.187984 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 00:12:08 crc kubenswrapper[3562]: E1213 00:12:08.188023 3562 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n" podUID="f07661a3-219e-48c0-ae91-20a88ffd8828" containerName="kube-multus-additional-cni-plugins"
Dec 13 00:12:13 crc kubenswrapper[3562]: I1213 00:12:13.052735 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 00:12:13 crc kubenswrapper[3562]: I1213 00:12:13.053338 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 00:12:18 crc kubenswrapper[3562]: E1213 00:12:18.185598 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 00:12:18 crc kubenswrapper[3562]: E1213 00:12:18.187304 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 00:12:18 crc kubenswrapper[3562]: E1213 00:12:18.188709 3562 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 00:12:18 crc kubenswrapper[3562]: E1213 00:12:18.188753 3562 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n" podUID="f07661a3-219e-48c0-ae91-20a88ffd8828" containerName="kube-multus-additional-cni-plugins"
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.706062 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7fx6n_f07661a3-219e-48c0-ae91-20a88ffd8828/kube-multus-additional-cni-plugins/0.log"
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.706330 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.847745 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7fx6n_f07661a3-219e-48c0-ae91-20a88ffd8828/kube-multus-additional-cni-plugins/0.log"
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.847838 3562 generic.go:334] "Generic (PLEG): container finished" podID="f07661a3-219e-48c0-ae91-20a88ffd8828" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b" exitCode=137
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.847880 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n" event={"ID":"f07661a3-219e-48c0-ae91-20a88ffd8828","Type":"ContainerDied","Data":"1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b"}
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.847953 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n" event={"ID":"f07661a3-219e-48c0-ae91-20a88ffd8828","Type":"ContainerDied","Data":"f616002adb99719dc3601cad7598109cdd0367c2e91de0853da0a68538bd7f85"}
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.848015 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7fx6n"
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.848029 3562 scope.go:117] "RemoveContainer" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b"
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.880231 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f07661a3-219e-48c0-ae91-20a88ffd8828-ready\") pod \"f07661a3-219e-48c0-ae91-20a88ffd8828\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") "
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.880335 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f07661a3-219e-48c0-ae91-20a88ffd8828-tuning-conf-dir\") pod \"f07661a3-219e-48c0-ae91-20a88ffd8828\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") "
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.880375 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcmpw\" (UniqueName: \"kubernetes.io/projected/f07661a3-219e-48c0-ae91-20a88ffd8828-kube-api-access-vcmpw\") pod \"f07661a3-219e-48c0-ae91-20a88ffd8828\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") "
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.880430 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f07661a3-219e-48c0-ae91-20a88ffd8828-cni-sysctl-allowlist\") pod \"f07661a3-219e-48c0-ae91-20a88ffd8828\" (UID: \"f07661a3-219e-48c0-ae91-20a88ffd8828\") "
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.880472 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f07661a3-219e-48c0-ae91-20a88ffd8828-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "f07661a3-219e-48c0-ae91-20a88ffd8828" (UID: "f07661a3-219e-48c0-ae91-20a88ffd8828"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.880640 3562 reconciler_common.go:300] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f07661a3-219e-48c0-ae91-20a88ffd8828-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.881289 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f07661a3-219e-48c0-ae91-20a88ffd8828-ready" (OuterVolumeSpecName: "ready") pod "f07661a3-219e-48c0-ae91-20a88ffd8828" (UID: "f07661a3-219e-48c0-ae91-20a88ffd8828"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.881963 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07661a3-219e-48c0-ae91-20a88ffd8828-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "f07661a3-219e-48c0-ae91-20a88ffd8828" (UID: "f07661a3-219e-48c0-ae91-20a88ffd8828"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.889065 3562 scope.go:117] "RemoveContainer" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b"
Dec 13 00:12:22 crc kubenswrapper[3562]: E1213 00:12:22.889891 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b\": container with ID starting with 1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b not found: ID does not exist" containerID="1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b"
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.890206 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b"} err="failed to get container status \"1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b\": rpc error: code = NotFound desc = could not find container \"1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b\": container with ID starting with 1c307ec6298b629c965e31dad4f4397d58b2cae296f4da3ac308582cb7d83e7b not found: ID does not exist"
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.890198 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07661a3-219e-48c0-ae91-20a88ffd8828-kube-api-access-vcmpw" (OuterVolumeSpecName: "kube-api-access-vcmpw") pod "f07661a3-219e-48c0-ae91-20a88ffd8828" (UID: "f07661a3-219e-48c0-ae91-20a88ffd8828"). InnerVolumeSpecName "kube-api-access-vcmpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.982149 3562 reconciler_common.go:300] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f07661a3-219e-48c0-ae91-20a88ffd8828-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.982195 3562 reconciler_common.go:300] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f07661a3-219e-48c0-ae91-20a88ffd8828-ready\") on node \"crc\" DevicePath \"\""
Dec 13 00:12:22 crc kubenswrapper[3562]: I1213 00:12:22.982212 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vcmpw\" (UniqueName: \"kubernetes.io/projected/f07661a3-219e-48c0-ae91-20a88ffd8828-kube-api-access-vcmpw\") on node \"crc\" DevicePath \"\""
Dec 13 00:12:23 crc kubenswrapper[3562]: I1213 00:12:23.176429 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7fx6n"]
Dec 13 00:12:23 crc kubenswrapper[3562]: I1213 00:12:23.180119 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7fx6n"]
Dec 13 00:12:23 crc kubenswrapper[3562]: I1213 00:12:23.887389 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f07661a3-219e-48c0-ae91-20a88ffd8828" path="/var/lib/kubelet/pods/f07661a3-219e-48c0-ae91-20a88ffd8828/volumes"
Dec 13 00:12:43 crc kubenswrapper[3562]: I1213 00:12:43.053435 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 00:12:43 crc kubenswrapper[3562]: I1213 00:12:43.054084 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 00:13:03 crc kubenswrapper[3562]: I1213 00:13:03.733385 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 13 00:13:03 crc kubenswrapper[3562]: I1213 00:13:03.733925 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 13 00:13:03 crc kubenswrapper[3562]: I1213 00:13:03.733950 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 13 00:13:03 crc kubenswrapper[3562]: I1213 00:13:03.733973 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 13 00:13:03 crc kubenswrapper[3562]: I1213 00:13:03.733992 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 13 00:13:13 crc kubenswrapper[3562]: I1213 00:13:13.053460 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 00:13:13 crc kubenswrapper[3562]: I1213 00:13:13.054430 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 00:13:13 crc kubenswrapper[3562]: I1213 00:13:13.054483 3562 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 13 00:13:13 crc kubenswrapper[3562]: I1213 00:13:13.055618 3562 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a6e9eb4ac073e0dfa6ae2034844d4a0104e16d5711e34811caed4fa91b88545"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 13 00:13:13 crc kubenswrapper[3562]: I1213 00:13:13.055855 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://7a6e9eb4ac073e0dfa6ae2034844d4a0104e16d5711e34811caed4fa91b88545" gracePeriod=600
Dec 13 00:13:13 crc kubenswrapper[3562]: I1213 00:13:13.418656 3562 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="7a6e9eb4ac073e0dfa6ae2034844d4a0104e16d5711e34811caed4fa91b88545" exitCode=0
Dec 13 00:13:13 crc kubenswrapper[3562]: I1213 00:13:13.418697 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"7a6e9eb4ac073e0dfa6ae2034844d4a0104e16d5711e34811caed4fa91b88545"}
Dec 13 00:13:13 crc kubenswrapper[3562]: I1213 00:13:13.418721 3562 scope.go:117] "RemoveContainer" containerID="6f050250cf2ec95c071158dd93704a2edcae951b1c7c4cbefc8d8a73798e9ee6"
Dec 13 00:13:14 crc kubenswrapper[3562]: I1213 00:13:14.425747 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"fe619c3248a2e072658a9e187d7c753cc185680dc456aa477791dd5c81bf75af"}
Dec 13 00:14:03 crc kubenswrapper[3562]: I1213 00:14:03.734210 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 13 00:14:03 crc kubenswrapper[3562]: I1213 00:14:03.734935 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 13 00:14:03 crc kubenswrapper[3562]: I1213 00:14:03.734960 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 13 00:14:03 crc kubenswrapper[3562]: I1213 00:14:03.734981 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 13 00:14:03 crc kubenswrapper[3562]: I1213 00:14:03.735033 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 13 00:14:16 crc kubenswrapper[3562]: I1213 00:14:16.491059 3562 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Dec 13 00:14:16 crc kubenswrapper[3562]: I1213 00:14:16.491600 3562 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.059251 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-75b7bb6564-8hnfv"]
Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.059778 3562 topology_manager.go:215] "Topology Admit Handler" podUID="bc90bcb2-4460-44e0-aed3-39c502c78bfd" podNamespace="openshift-image-registry" podName="image-registry-75b7bb6564-8hnfv"
Dec 13 00:14:25 crc kubenswrapper[3562]: E1213 00:14:25.059971 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="f07661a3-219e-48c0-ae91-20a88ffd8828" containerName="kube-multus-additional-cni-plugins"
Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.059986 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07661a3-219e-48c0-ae91-20a88ffd8828" containerName="kube-multus-additional-cni-plugins"
Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.060109 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07661a3-219e-48c0-ae91-20a88ffd8828" containerName="kube-multus-additional-cni-plugins"
Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.060552 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.075365 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75b7bb6564-8hnfv"] Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.182236 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bc90bcb2-4460-44e0-aed3-39c502c78bfd-registry-tls\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.182285 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bc90bcb2-4460-44e0-aed3-39c502c78bfd-registry-certificates\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.182321 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.182342 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv5l5\" (UniqueName: \"kubernetes.io/projected/bc90bcb2-4460-44e0-aed3-39c502c78bfd-kube-api-access-qv5l5\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.182484 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc90bcb2-4460-44e0-aed3-39c502c78bfd-trusted-ca\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.182513 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bc90bcb2-4460-44e0-aed3-39c502c78bfd-ca-trust-extracted\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.182537 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bc90bcb2-4460-44e0-aed3-39c502c78bfd-installation-pull-secrets\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.182570 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bc90bcb2-4460-44e0-aed3-39c502c78bfd-bound-sa-token\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.205059 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.284167 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bc90bcb2-4460-44e0-aed3-39c502c78bfd-registry-tls\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.284459 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bc90bcb2-4460-44e0-aed3-39c502c78bfd-registry-certificates\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.284488 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qv5l5\" (UniqueName: \"kubernetes.io/projected/bc90bcb2-4460-44e0-aed3-39c502c78bfd-kube-api-access-qv5l5\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.284518 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc90bcb2-4460-44e0-aed3-39c502c78bfd-trusted-ca\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.284542 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bc90bcb2-4460-44e0-aed3-39c502c78bfd-ca-trust-extracted\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.284563 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bc90bcb2-4460-44e0-aed3-39c502c78bfd-installation-pull-secrets\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.284599 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bc90bcb2-4460-44e0-aed3-39c502c78bfd-bound-sa-token\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " 
pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.285293 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bc90bcb2-4460-44e0-aed3-39c502c78bfd-ca-trust-extracted\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.286102 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bc90bcb2-4460-44e0-aed3-39c502c78bfd-registry-certificates\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.286134 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc90bcb2-4460-44e0-aed3-39c502c78bfd-trusted-ca\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.290369 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bc90bcb2-4460-44e0-aed3-39c502c78bfd-installation-pull-secrets\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.291223 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bc90bcb2-4460-44e0-aed3-39c502c78bfd-registry-tls\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.300628 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bc90bcb2-4460-44e0-aed3-39c502c78bfd-bound-sa-token\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.301301 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv5l5\" (UniqueName: \"kubernetes.io/projected/bc90bcb2-4460-44e0-aed3-39c502c78bfd-kube-api-access-qv5l5\") pod \"image-registry-75b7bb6564-8hnfv\" (UID: \"bc90bcb2-4460-44e0-aed3-39c502c78bfd\") " pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.380489 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.644074 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75b7bb6564-8hnfv"] Dec 13 00:14:25 crc kubenswrapper[3562]: I1213 00:14:25.751483 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" event={"ID":"bc90bcb2-4460-44e0-aed3-39c502c78bfd","Type":"ContainerStarted","Data":"b503054e2a6c8aed0ec532b19a9dbfed592954a109761e26d6c6a2290596a17c"} Dec 13 00:14:26 crc kubenswrapper[3562]: I1213 00:14:26.756470 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" event={"ID":"bc90bcb2-4460-44e0-aed3-39c502c78bfd","Type":"ContainerStarted","Data":"8ec67bac61179db4a75ef73ac68594c103fe0a8cc18cbcafbdd51de24249f5ac"} Dec 13 00:14:26 crc kubenswrapper[3562]: I1213 00:14:26.775157 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" podStartSLOduration=1.7751170809999999 podStartE2EDuration="1.775117081s" podCreationTimestamp="2025-12-13 00:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:14:26.771574061 +0000 UTC m=+503.254804584" watchObservedRunningTime="2025-12-13 00:14:26.775117081 +0000 UTC m=+503.258347604" Dec 13 00:14:27 crc kubenswrapper[3562]: I1213 00:14:27.355225 3562 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Dec 13 00:14:27 crc kubenswrapper[3562]: I1213 00:14:27.427713 3562 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Dec 13 00:14:27 crc kubenswrapper[3562]: I1213 00:14:27.760495 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:28 crc kubenswrapper[3562]: I1213 00:14:28.476194 3562 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.628869 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-13-crc"] Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.629329 3562 topology_manager.go:215] "Topology Admit Handler" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" podNamespace="openshift-kube-apiserver" podName="installer-13-crc" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.630034 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.632440 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-4kgh8" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.632655 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.636400 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-13-crc"] Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.802019 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-var-lock\") pod \"installer-13-crc\" (UID: \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.802348 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kubelet-dir\") pod \"installer-13-crc\" (UID: \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.802388 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kube-api-access\") pod \"installer-13-crc\" (UID: \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.903785 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kubelet-dir\") pod \"installer-13-crc\" (UID: \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.903839 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kube-api-access\") pod \"installer-13-crc\" (UID: \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.903945 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kubelet-dir\") pod \"installer-13-crc\" (UID: \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.903948 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-var-lock\") pod \"installer-13-crc\" (UID: \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.903986 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-var-lock\") pod \"installer-13-crc\" (UID: 
\"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.928565 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kube-api-access\") pod \"installer-13-crc\" (UID: \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") " pod="openshift-kube-apiserver/installer-13-crc" Dec 13 00:14:32 crc kubenswrapper[3562]: I1213 00:14:32.945816 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc" Dec 13 00:14:33 crc kubenswrapper[3562]: I1213 00:14:33.383268 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-13-crc"] Dec 13 00:14:33 crc kubenswrapper[3562]: I1213 00:14:33.789172 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"1c7093b7-6001-4ad1-8f11-36961d0fb83f","Type":"ContainerStarted","Data":"cd41e621844ec3c2db06d57396ab091822786db2fe6eb172ebc671f070ac9d51"} Dec 13 00:14:37 crc kubenswrapper[3562]: I1213 00:14:37.807608 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"1c7093b7-6001-4ad1-8f11-36961d0fb83f","Type":"ContainerStarted","Data":"9067733d5dc2ddbec46f38c5746c75178efeae37e5c2b5f289c02435e9602d4b"} Dec 13 00:14:37 crc kubenswrapper[3562]: I1213 00:14:37.824257 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-13-crc" podStartSLOduration=5.824216325 podStartE2EDuration="5.824216325s" podCreationTimestamp="2025-12-13 00:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:14:37.82118099 +0000 UTC m=+514.304411533" watchObservedRunningTime="2025-12-13 00:14:37.824216325 +0000 UTC m=+514.307446848" Dec 13 00:14:45 crc kubenswrapper[3562]: I1213 00:14:45.388033 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-75b7bb6564-8hnfv" Dec 13 00:14:45 crc kubenswrapper[3562]: I1213 00:14:45.441458 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.153282 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd"] Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.154616 3562 topology_manager.go:215] "Topology Admit Handler" podUID="a5113a05-452c-4b56-90b5-e303b9f88675" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.155425 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.158578 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.158643 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.158848 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd"] Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.274801 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5113a05-452c-4b56-90b5-e303b9f88675-secret-volume\") pod \"collect-profiles-29426415-f8pcd\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.274863 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5113a05-452c-4b56-90b5-e303b9f88675-config-volume\") pod \"collect-profiles-29426415-f8pcd\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.275003 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lftc\" (UniqueName: \"kubernetes.io/projected/a5113a05-452c-4b56-90b5-e303b9f88675-kube-api-access-7lftc\") pod \"collect-profiles-29426415-f8pcd\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.376473 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5113a05-452c-4b56-90b5-e303b9f88675-secret-volume\") pod \"collect-profiles-29426415-f8pcd\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.376531 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5113a05-452c-4b56-90b5-e303b9f88675-config-volume\") pod \"collect-profiles-29426415-f8pcd\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.376606 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7lftc\" (UniqueName: \"kubernetes.io/projected/a5113a05-452c-4b56-90b5-e303b9f88675-kube-api-access-7lftc\") pod \"collect-profiles-29426415-f8pcd\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.379628 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5113a05-452c-4b56-90b5-e303b9f88675-config-volume\") pod 
\"collect-profiles-29426415-f8pcd\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.391785 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lftc\" (UniqueName: \"kubernetes.io/projected/a5113a05-452c-4b56-90b5-e303b9f88675-kube-api-access-7lftc\") pod \"collect-profiles-29426415-f8pcd\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.391813 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5113a05-452c-4b56-90b5-e303b9f88675-secret-volume\") pod \"collect-profiles-29426415-f8pcd\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.476938 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:00 crc kubenswrapper[3562]: I1213 00:15:00.861165 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd"] Dec 13 00:15:01 crc kubenswrapper[3562]: I1213 00:15:01.314998 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" event={"ID":"a5113a05-452c-4b56-90b5-e303b9f88675","Type":"ContainerStarted","Data":"8097f4a44407aaf353d08d73a5485fd6b32d682258078efb6b11b70816cee688"} Dec 13 00:15:01 crc kubenswrapper[3562]: I1213 00:15:01.315086 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" event={"ID":"a5113a05-452c-4b56-90b5-e303b9f88675","Type":"ContainerStarted","Data":"c9e24ce30f6b354d470a797535d9c02ebeaee721d1067a67423eae86e78e55cd"} Dec 13 00:15:01 crc kubenswrapper[3562]: I1213 00:15:01.332141 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" podStartSLOduration=1.332102231 podStartE2EDuration="1.332102231s" podCreationTimestamp="2025-12-13 00:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:15:01.328020889 +0000 UTC m=+537.811251412" watchObservedRunningTime="2025-12-13 00:15:01.332102231 +0000 UTC m=+537.815332764" Dec 13 00:15:02 crc kubenswrapper[3562]: I1213 00:15:02.319773 3562 generic.go:334] "Generic (PLEG): container finished" podID="a5113a05-452c-4b56-90b5-e303b9f88675" containerID="8097f4a44407aaf353d08d73a5485fd6b32d682258078efb6b11b70816cee688" exitCode=0 Dec 13 00:15:02 crc kubenswrapper[3562]: I1213 00:15:02.319852 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" event={"ID":"a5113a05-452c-4b56-90b5-e303b9f88675","Type":"ContainerDied","Data":"8097f4a44407aaf353d08d73a5485fd6b32d682258078efb6b11b70816cee688"} Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.531552 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.625557 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lftc\" (UniqueName: \"kubernetes.io/projected/a5113a05-452c-4b56-90b5-e303b9f88675-kube-api-access-7lftc\") pod \"a5113a05-452c-4b56-90b5-e303b9f88675\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.625599 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5113a05-452c-4b56-90b5-e303b9f88675-secret-volume\") pod \"a5113a05-452c-4b56-90b5-e303b9f88675\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.625678 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5113a05-452c-4b56-90b5-e303b9f88675-config-volume\") pod \"a5113a05-452c-4b56-90b5-e303b9f88675\" (UID: \"a5113a05-452c-4b56-90b5-e303b9f88675\") " Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.626668 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5113a05-452c-4b56-90b5-e303b9f88675-config-volume" (OuterVolumeSpecName: "config-volume") pod "a5113a05-452c-4b56-90b5-e303b9f88675" (UID: "a5113a05-452c-4b56-90b5-e303b9f88675"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.631449 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5113a05-452c-4b56-90b5-e303b9f88675-kube-api-access-7lftc" (OuterVolumeSpecName: "kube-api-access-7lftc") pod "a5113a05-452c-4b56-90b5-e303b9f88675" (UID: "a5113a05-452c-4b56-90b5-e303b9f88675"). InnerVolumeSpecName "kube-api-access-7lftc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.631864 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5113a05-452c-4b56-90b5-e303b9f88675-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a5113a05-452c-4b56-90b5-e303b9f88675" (UID: "a5113a05-452c-4b56-90b5-e303b9f88675"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.727045 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7lftc\" (UniqueName: \"kubernetes.io/projected/a5113a05-452c-4b56-90b5-e303b9f88675-kube-api-access-7lftc\") on node \"crc\" DevicePath \"\"" Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.727080 3562 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5113a05-452c-4b56-90b5-e303b9f88675-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.727091 3562 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5113a05-452c-4b56-90b5-e303b9f88675-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.735310 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.735482 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.735515 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.735576 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:15:03 crc kubenswrapper[3562]: I1213 00:15:03.735651 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:15:04 crc kubenswrapper[3562]: I1213 00:15:04.329996 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" event={"ID":"a5113a05-452c-4b56-90b5-e303b9f88675","Type":"ContainerDied","Data":"c9e24ce30f6b354d470a797535d9c02ebeaee721d1067a67423eae86e78e55cd"} Dec 13 00:15:04 crc kubenswrapper[3562]: I1213 00:15:04.330037 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9e24ce30f6b354d470a797535d9c02ebeaee721d1067a67423eae86e78e55cd" Dec 13 00:15:04 crc kubenswrapper[3562]: I1213 00:15:04.330093 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-f8pcd" Dec 13 00:15:04 crc kubenswrapper[3562]: I1213 00:15:04.398983 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Dec 13 00:15:04 crc kubenswrapper[3562]: I1213 00:15:04.402704 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Dec 13 00:15:05 crc kubenswrapper[3562]: I1213 00:15:05.884149 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" path="/var/lib/kubelet/pods/51936587-a4af-470d-ad92-8ab9062cbc72/volumes" Dec 13 00:15:10 crc kubenswrapper[3562]: I1213 00:15:10.520164 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry" containerID="cri-o://7bcc9373144aa1baeead614defc177c9a227076a45e5bdcad17ffc8920843d2c" gracePeriod=30 Dec 13 00:15:11 crc kubenswrapper[3562]: I1213 00:15:11.368698 3562 generic.go:334] "Generic (PLEG): container finished" podID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerID="7bcc9373144aa1baeead614defc177c9a227076a45e5bdcad17ffc8920843d2c" exitCode=0 Dec 13 00:15:11 crc kubenswrapper[3562]: I1213 00:15:11.368745 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerDied","Data":"7bcc9373144aa1baeead614defc177c9a227076a45e5bdcad17ffc8920843d2c"} Dec 13 00:15:11 crc kubenswrapper[3562]: I1213 00:15:11.895990 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.043867 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.044368 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.044576 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.044763 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.045005 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.045220 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.045414 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.045595 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.045818 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.045896 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.047008 3562 reconciler_common.go:300] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.047068 3562 reconciler_common.go:300] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.049773 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.050646 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.058528 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.060665 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.061399 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv" (OuterVolumeSpecName: "kube-api-access-scpwv") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "kube-api-access-scpwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.106763 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (OuterVolumeSpecName: "registry-storage") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.148489 3562 reconciler_common.go:300] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.148537 3562 reconciler_common.go:300] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.148552 3562 reconciler_common.go:300] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.148569 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") on node \"crc\" DevicePath \"\"" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.148583 3562 reconciler_common.go:300] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.374075 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerDied","Data":"ce558c209d6117269c8377cbe364cf9dc9f464b7745c2e681907ca27a02091a7"} Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.374094 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.374117 3562 scope.go:117] "RemoveContainer" containerID="7bcc9373144aa1baeead614defc177c9a227076a45e5bdcad17ffc8920843d2c" Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.410881 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Dec 13 00:15:12 crc kubenswrapper[3562]: I1213 00:15:12.416545 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Dec 13 00:15:13 crc kubenswrapper[3562]: I1213 00:15:13.886346 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" path="/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes" Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.549682 3562 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.550232 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" containerID="cri-o://4d133fa5757238da3b7fe8dd7c27de06e8aa00eb048d2818fad45877fd1377a4" gracePeriod=15 Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.550321 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" containerID="cri-o://49c6a556089813d09a464ae0020d48fc2ecced0e755f8f8a63664f0c7a6148cc" gracePeriod=15 Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.550436 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d8eb425328e224404d5ed33dee54bc752129804bab6d33fec60a1e7d4b86f246" gracePeriod=15 Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.550551 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4ce7dfdcabe0ad2eec941a3cf6bee306e49c16470f3d8addbbd2944392fb77ec" gracePeriod=15 Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.550596 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ded8bd9cf1fd51afd473a88657f94e48f3a4edec471438cfcf4356047ac64fe9" gracePeriod=15 Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.553203 3562 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.553465 3562 topology_manager.go:215] "Topology Admit Handler" podUID="7f3419c3ca30b18b78e8dd2488b00489" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc" Dec 13 00:15:15 crc kubenswrapper[3562]: E1213 00:15:15.553746 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a5113a05-452c-4b56-90b5-e303b9f88675" containerName="collect-profiles" Dec 13 
00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.553916 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5113a05-452c-4b56-90b5-e303b9f88675" containerName="collect-profiles"
Dec 13 00:15:15 crc kubenswrapper[3562]: E1213 00:15:15.554023 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.554117 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller"
Dec 13 00:15:15 crc kubenswrapper[3562]: E1213 00:15:15.554209 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.554293 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry"
Dec 13 00:15:15 crc kubenswrapper[3562]: E1213 00:15:15.554380 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.554470 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver"
Dec 13 00:15:15 crc kubenswrapper[3562]: E1213 00:15:15.554567 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.554654 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz"
Dec 13 00:15:15 crc kubenswrapper[3562]: E1213 00:15:15.554743 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="setup"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.554829 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="setup"
Dec 13 00:15:15 crc kubenswrapper[3562]: E1213 00:15:15.554939 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.555033 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer"
Dec 13 00:15:15 crc kubenswrapper[3562]: E1213 00:15:15.555125 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.555221 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.555447 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.555548 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.555637 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.555772 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.555869 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.555986 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.556084 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5113a05-452c-4b56-90b5-e303b9f88675" containerName="collect-profiles"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.557970 3562 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.558174 3562 topology_manager.go:215] "Topology Admit Handler" podUID="7dae59545f22b3fb679a7fbf878a6379" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.558930 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.563902 3562 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="ae85115fdc231b4002b57317b41a6400" podUID="7f3419c3ca30b18b78e8dd2488b00489"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.634307 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.689936 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.690014 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.690050 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.690090 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.690550 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.690619 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.690672 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.690696 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792242 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792330 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792369 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792395 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792417 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792419 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792450 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792464 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792480 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792480 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792509 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792526 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792534 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792600 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792600 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.792575 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: I1213 00:15:15.924068 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 00:15:15 crc kubenswrapper[3562]: E1213 00:15:15.947288 3562 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.198:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18809e2067c6bd74 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7dae59545f22b3fb679a7fbf878a6379,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:15:15.946454388 +0000 UTC m=+552.429684911,LastTimestamp:2025-12-13 00:15:15.946454388 +0000 UTC m=+552.429684911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 13 00:15:16 crc kubenswrapper[3562]: I1213 00:15:16.398205 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log"
Dec 13 00:15:16 crc kubenswrapper[3562]: I1213 00:15:16.399162 3562 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="ded8bd9cf1fd51afd473a88657f94e48f3a4edec471438cfcf4356047ac64fe9" exitCode=2
Dec 13 00:15:16 crc kubenswrapper[3562]: I1213 00:15:16.400621 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7dae59545f22b3fb679a7fbf878a6379","Type":"ContainerStarted","Data":"ecd9443fb1486631f622b3b58fc1d8106cfc09ee997c539b7323b55236c76a01"}
Dec 13 00:15:17 crc kubenswrapper[3562]: I1213 00:15:17.409586 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log"
Dec 13 00:15:17 crc kubenswrapper[3562]: I1213 00:15:17.410398 3562 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="d8eb425328e224404d5ed33dee54bc752129804bab6d33fec60a1e7d4b86f246" exitCode=0
Dec 13 00:15:17 crc kubenswrapper[3562]: I1213 00:15:17.410426 3562 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="4ce7dfdcabe0ad2eec941a3cf6bee306e49c16470f3d8addbbd2944392fb77ec" exitCode=0
Dec 13 00:15:18 crc kubenswrapper[3562]: I1213 00:15:18.420506 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log"
Dec 13 00:15:18 crc kubenswrapper[3562]: I1213 00:15:18.421758 3562 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="49c6a556089813d09a464ae0020d48fc2ecced0e755f8f8a63664f0c7a6148cc" exitCode=0
Dec 13 00:15:18 crc kubenswrapper[3562]: I1213 00:15:18.423860 3562 generic.go:334] "Generic (PLEG): container finished" podID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" containerID="9067733d5dc2ddbec46f38c5746c75178efeae37e5c2b5f289c02435e9602d4b" exitCode=0
Dec 13 00:15:18 crc kubenswrapper[3562]: I1213 00:15:18.423922 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"1c7093b7-6001-4ad1-8f11-36961d0fb83f","Type":"ContainerDied","Data":"9067733d5dc2ddbec46f38c5746c75178efeae37e5c2b5f289c02435e9602d4b"}
Dec 13 00:15:18 crc kubenswrapper[3562]: I1213 00:15:18.424665 3562 status_manager.go:853] "Failed to get status for pod" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:18 crc kubenswrapper[3562]: I1213 00:15:18.425175 3562 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.432769 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log"
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.434118 3562 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="4d133fa5757238da3b7fe8dd7c27de06e8aa00eb048d2818fad45877fd1377a4" exitCode=0
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.435519 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7dae59545f22b3fb679a7fbf878a6379","Type":"ContainerStarted","Data":"5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf"}
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.436606 3562 status_manager.go:853] "Failed to get status for pod" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.437089 3562 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:19 crc kubenswrapper[3562]: E1213 00:15:19.539053 3562 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:19 crc kubenswrapper[3562]: E1213 00:15:19.539295 3562 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:19 crc kubenswrapper[3562]: E1213 00:15:19.539665 3562 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:19 crc kubenswrapper[3562]: E1213 00:15:19.539880 3562 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:19 crc kubenswrapper[3562]: E1213 00:15:19.540253 3562 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:19 crc kubenswrapper[3562]: E1213 00:15:19.540280 3562 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.685628 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc"
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.686148 3562 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.686355 3562 status_manager.go:853] "Failed to get status for pod" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.851470 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-var-lock\") pod \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\" (UID: \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") "
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.851541 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-var-lock" (OuterVolumeSpecName: "var-lock") pod "1c7093b7-6001-4ad1-8f11-36961d0fb83f" (UID: "1c7093b7-6001-4ad1-8f11-36961d0fb83f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.851599 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kubelet-dir\") pod \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\" (UID: \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") "
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.851627 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c7093b7-6001-4ad1-8f11-36961d0fb83f" (UID: "1c7093b7-6001-4ad1-8f11-36961d0fb83f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.851656 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kube-api-access\") pod \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\" (UID: \"1c7093b7-6001-4ad1-8f11-36961d0fb83f\") "
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.852016 3562 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-var-lock\") on node \"crc\" DevicePath \"\""
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.852050 3562 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.856413 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c7093b7-6001-4ad1-8f11-36961d0fb83f" (UID: "1c7093b7-6001-4ad1-8f11-36961d0fb83f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 00:15:19 crc kubenswrapper[3562]: I1213 00:15:19.952624 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c7093b7-6001-4ad1-8f11-36961d0fb83f-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.193535 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.195004 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.195876 3562 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.196693 3562 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.197390 3562 status_manager.go:853] "Failed to get status for pod" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.357078 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"ae85115fdc231b4002b57317b41a6400\" (UID: \"ae85115fdc231b4002b57317b41a6400\") "
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.357145 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"ae85115fdc231b4002b57317b41a6400\" (UID: \"ae85115fdc231b4002b57317b41a6400\") "
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.357211 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"ae85115fdc231b4002b57317b41a6400\" (UID: \"ae85115fdc231b4002b57317b41a6400\") "
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.357264 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ae85115fdc231b4002b57317b41a6400" (UID: "ae85115fdc231b4002b57317b41a6400"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.357272 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ae85115fdc231b4002b57317b41a6400" (UID: "ae85115fdc231b4002b57317b41a6400"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.357342 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "ae85115fdc231b4002b57317b41a6400" (UID: "ae85115fdc231b4002b57317b41a6400"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.357598 3562 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") on node \"crc\" DevicePath \"\""
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.357618 3562 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.357631 3562 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.441287 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.441287 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"1c7093b7-6001-4ad1-8f11-36961d0fb83f","Type":"ContainerDied","Data":"cd41e621844ec3c2db06d57396ab091822786db2fe6eb172ebc671f070ac9d51"}
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.441451 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd41e621844ec3c2db06d57396ab091822786db2fe6eb172ebc671f070ac9d51"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.441960 3562 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.442241 3562 status_manager.go:853] "Failed to get status for pod" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.442579 3562 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.444464 3562 status_manager.go:853] "Failed to get status for pod" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.444776 3562 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.445168 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.445174 3562 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.445959 3562 scope.go:117] "RemoveContainer" containerID="49c6a556089813d09a464ae0020d48fc2ecced0e755f8f8a63664f0c7a6148cc"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.445989 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.462191 3562 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.462541 3562 status_manager.go:853] "Failed to get status for pod" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.462849 3562 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.469181 3562 scope.go:117] "RemoveContainer" containerID="d8eb425328e224404d5ed33dee54bc752129804bab6d33fec60a1e7d4b86f246"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.487732 3562 scope.go:117] "RemoveContainer" containerID="4ce7dfdcabe0ad2eec941a3cf6bee306e49c16470f3d8addbbd2944392fb77ec"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.509923 3562 scope.go:117] "RemoveContainer" containerID="ded8bd9cf1fd51afd473a88657f94e48f3a4edec471438cfcf4356047ac64fe9"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.530457 3562 scope.go:117] "RemoveContainer" containerID="4d133fa5757238da3b7fe8dd7c27de06e8aa00eb048d2818fad45877fd1377a4"
Dec 13 00:15:20 crc kubenswrapper[3562]: I1213 00:15:20.552306 3562 scope.go:117] "RemoveContainer" containerID="6b714d9cc211928a69778273896d7209694b3a4d32a602ad86f5ee50b893ffab"
Dec 13 00:15:21 crc kubenswrapper[3562]: I1213 00:15:21.881851 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae85115fdc231b4002b57317b41a6400" path="/var/lib/kubelet/pods/ae85115fdc231b4002b57317b41a6400/volumes"
Dec 13 00:15:22 crc kubenswrapper[3562]: E1213 00:15:22.734200 3562 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.198:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18809e2067c6bd74 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7dae59545f22b3fb679a7fbf878a6379,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 00:15:15.946454388 +0000 UTC m=+552.429684911,LastTimestamp:2025-12-13 00:15:15.946454388 +0000 UTC m=+552.429684911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 13 00:15:23 crc kubenswrapper[3562]: E1213 00:15:23.055512 3562 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:23 crc kubenswrapper[3562]: E1213 00:15:23.055944 3562 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:23 crc kubenswrapper[3562]: E1213 00:15:23.056694 3562 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:23 crc kubenswrapper[3562]: E1213 00:15:23.057000 3562 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:23 crc kubenswrapper[3562]: E1213 00:15:23.057331 3562 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:23 crc kubenswrapper[3562]: I1213 00:15:23.057362 3562 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 13 00:15:23 crc kubenswrapper[3562]: E1213 00:15:23.057622 3562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused" interval="200ms"
Dec 13 00:15:23 crc kubenswrapper[3562]: E1213 00:15:23.258609 3562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused" interval="400ms"
Dec 13 00:15:23 crc kubenswrapper[3562]: E1213 00:15:23.659755 3562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused" interval="800ms"
Dec 13 00:15:23 crc kubenswrapper[3562]: I1213 00:15:23.877451 3562 status_manager.go:853] "Failed to get status for pod" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:23 crc kubenswrapper[3562]: I1213 00:15:23.877772 3562 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:24 crc kubenswrapper[3562]: E1213 00:15:24.461145 3562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused" interval="1.6s"
Dec 13 00:15:26 crc kubenswrapper[3562]: E1213 00:15:26.062252 3562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused" interval="3.2s"
Dec 13 00:15:29 crc kubenswrapper[3562]: E1213 00:15:29.264107 3562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused" interval="6.4s"
Dec 13 00:15:29 crc kubenswrapper[3562]: E1213 00:15:29.809240 3562 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:29 crc kubenswrapper[3562]: E1213 00:15:29.809950 3562 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:29 crc kubenswrapper[3562]: E1213 00:15:29.810168 3562 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:29 crc kubenswrapper[3562]: E1213 00:15:29.810354 3562 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:29 crc kubenswrapper[3562]: E1213 00:15:29.810512 3562 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:29 crc kubenswrapper[3562]: E1213 00:15:29.810532 3562 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Dec 13 00:15:29 crc kubenswrapper[3562]: I1213 00:15:29.874438 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:29 crc kubenswrapper[3562]: I1213 00:15:29.876338 3562 status_manager.go:853] "Failed to get status for pod" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:29 crc kubenswrapper[3562]: I1213 00:15:29.876939 3562 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:29 crc kubenswrapper[3562]: I1213 00:15:29.890172 3562 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:29 crc kubenswrapper[3562]: I1213 00:15:29.890214 3562 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:29 crc kubenswrapper[3562]: E1213 00:15:29.891268 3562 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.198:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:29 crc kubenswrapper[3562]: I1213 00:15:29.891921 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:29 crc kubenswrapper[3562]: W1213 00:15:29.934672 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f3419c3ca30b18b78e8dd2488b00489.slice/crio-c90ca8d6447ed14f475c1ac48ac1fbac405a6604d9f92bb167261acf835112b0 WatchSource:0}: Error finding container c90ca8d6447ed14f475c1ac48ac1fbac405a6604d9f92bb167261acf835112b0: Status 404 returned error can't find the container with id c90ca8d6447ed14f475c1ac48ac1fbac405a6604d9f92bb167261acf835112b0
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.499553 3562 generic.go:334] "Generic (PLEG): container finished" podID="7f3419c3ca30b18b78e8dd2488b00489" containerID="9d660011795ce6fad556a1b7104af2a20bfb8352505860f9fe19487beaada73d" exitCode=0
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.499674 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerDied","Data":"9d660011795ce6fad556a1b7104af2a20bfb8352505860f9fe19487beaada73d"}
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.499841 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"c90ca8d6447ed14f475c1ac48ac1fbac405a6604d9f92bb167261acf835112b0"}
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.500116 3562 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.500127 3562 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:30 crc kubenswrapper[3562]: E1213 00:15:30.500496 3562 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.198:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.500679 3562 status_manager.go:853] "Failed to get status for pod" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.500875 3562 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.505413 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/2.log"
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.505453 3562 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="c6e9ba1bf1bd3177fe795558bc6feed1a2c0a824249812472c2a530048b2bc9b" exitCode=1
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.505471 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"c6e9ba1bf1bd3177fe795558bc6feed1a2c0a824249812472c2a530048b2bc9b"}
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.505766 3562 scope.go:117] "RemoveContainer" containerID="c6e9ba1bf1bd3177fe795558bc6feed1a2c0a824249812472c2a530048b2bc9b"
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.507513 3562 status_manager.go:853] "Failed to get status for pod" podUID="bd6a3a59e513625ca0ae3724df2686bc" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.507749 3562 status_manager.go:853] "Failed to get status for pod" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:30 crc kubenswrapper[3562]: I1213 00:15:30.507968 3562 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.198:6443: connect: connection refused"
Dec 13 00:15:31 crc kubenswrapper[3562]: I1213 00:15:31.513228 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"0a85fda3055cc31157a9be6a8785aefb14999db9894c9dc0233dbd67e5d4369d"}
Dec 13 00:15:31 crc kubenswrapper[3562]: I1213 00:15:31.513470 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"f1c246184304da3066a4bdcb4411100630d85db88f1943f1dddbf9815f4c1641"}
Dec 13 00:15:31 crc kubenswrapper[3562]: I1213 00:15:31.516131 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/2.log"
Dec 13 00:15:31 crc kubenswrapper[3562]: I1213 00:15:31.516171 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"c654da0607487f16b08174cea41c369218e1b4dee9dd12b3cba6886ff3f8cdee"}
Dec 13 00:15:32 crc kubenswrapper[3562]: I1213 00:15:32.532373 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"637251ae2f1de6a169e816bdda9aa4d665e1b76a2f6db0258ae3a1a76f987421"}
Dec 13 00:15:33 crc kubenswrapper[3562]: I1213 00:15:33.249884 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:15:33 crc kubenswrapper[3562]: I1213 00:15:33.250079 3562 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Dec 13 00:15:33 crc kubenswrapper[3562]: I1213 00:15:33.250145 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Dec 13 00:15:33 crc kubenswrapper[3562]: I1213 00:15:33.541636 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"844279bdfe556f965b431588fbc49efc42ebcd93cbf97523d6b62baebc839da4"}
Dec 13 00:15:34 crc kubenswrapper[3562]: I1213 00:15:34.549823 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"5c4cb7b01ee58cf69823ed4b82aef043396da477c277d03906e6152411e7d514"}
Dec 13 00:15:34 crc kubenswrapper[3562]: I1213 00:15:34.550267 3562 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:34 crc kubenswrapper[3562]: I1213 00:15:34.550293 3562 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:34 crc kubenswrapper[3562]: I1213 00:15:34.557188 3562 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:34 crc kubenswrapper[3562]: I1213 00:15:34.892217 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:34 crc kubenswrapper[3562]: I1213 00:15:34.892272 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:34 crc kubenswrapper[3562]: I1213 00:15:34.897421 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:35 crc kubenswrapper[3562]: I1213 00:15:35.554648 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:35 crc kubenswrapper[3562]: I1213 00:15:35.554655 3562 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:35 crc kubenswrapper[3562]: I1213 00:15:35.554727 3562 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:35 crc kubenswrapper[3562]: I1213 00:15:35.560458 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:36 crc kubenswrapper[3562]: I1213 00:15:36.466792 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:15:36 crc kubenswrapper[3562]: I1213 00:15:36.558975 3562 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:36 crc kubenswrapper[3562]: I1213 00:15:36.558998 3562 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:37 crc kubenswrapper[3562]: I1213 00:15:37.356038 3562 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="7f3419c3ca30b18b78e8dd2488b00489" podUID="96338956-2a24-4636-80c1-2dcff03fc0bf"
Dec 13 00:15:37 crc kubenswrapper[3562]: I1213 00:15:37.562618 3562 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:37 crc kubenswrapper[3562]: I1213 00:15:37.562641 3562 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:37 crc kubenswrapper[3562]: I1213 00:15:37.566460 3562 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="7f3419c3ca30b18b78e8dd2488b00489" podUID="96338956-2a24-4636-80c1-2dcff03fc0bf"
Dec 13 00:15:43 crc kubenswrapper[3562]: I1213 00:15:43.052957 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 00:15:43 crc kubenswrapper[3562]: I1213 00:15:43.053511 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 00:15:43 crc kubenswrapper[3562]: I1213 00:15:43.256199 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:15:43 crc kubenswrapper[3562]: I1213 00:15:43.262981 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 13 00:15:47 crc kubenswrapper[3562]: I1213 00:15:47.378671 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 13 00:15:47 crc kubenswrapper[3562]: I1213 00:15:47.937838 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 13 00:15:48 crc kubenswrapper[3562]: I1213 00:15:48.490085 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 13 00:15:48 crc kubenswrapper[3562]: I1213 00:15:48.536239 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 13 00:15:48 crc kubenswrapper[3562]: I1213 00:15:48.914087 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 13 00:15:48 crc kubenswrapper[3562]: I1213 00:15:48.969128 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 13 00:15:49 crc kubenswrapper[3562]: I1213 00:15:49.022292 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 13 00:15:49 crc kubenswrapper[3562]: I1213 00:15:49.132796 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 13 00:15:49 crc kubenswrapper[3562]: I1213 00:15:49.217331 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 13 00:15:49 crc kubenswrapper[3562]: I1213 00:15:49.252262 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 13 00:15:49 crc kubenswrapper[3562]: I1213 00:15:49.367544 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 13 00:15:49 crc kubenswrapper[3562]: I1213 00:15:49.447236 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 13 00:15:49 crc kubenswrapper[3562]: I1213 00:15:49.594405 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 13 00:15:49 crc kubenswrapper[3562]: I1213 00:15:49.757619 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 13 00:15:49 crc kubenswrapper[3562]: I1213 00:15:49.900581 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 13 00:15:50 crc kubenswrapper[3562]: I1213 00:15:50.228289 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 13 00:15:50 crc kubenswrapper[3562]: I1213 00:15:50.238840 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 13 00:15:50 crc kubenswrapper[3562]: I1213 00:15:50.310964 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 13 00:15:50 crc kubenswrapper[3562]: I1213 00:15:50.554147 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 13 00:15:50 crc kubenswrapper[3562]: I1213 00:15:50.730876 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l"
Dec 13 00:15:50 crc kubenswrapper[3562]: I1213 00:15:50.962242 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.066720 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.181870 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.246155 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.360800 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.370870 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.387005 3562 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.393704 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.393594897 podStartE2EDuration="36.393594897s" podCreationTimestamp="2025-12-13 00:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:15:37.345061901 +0000 UTC m=+573.828292424" watchObservedRunningTime="2025-12-13 00:15:51.393594897 +0000 UTC m=+587.876825460"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.402114 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.402836 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.403987 3562 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.404072 3562 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.425820 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.425750704 podStartE2EDuration="17.425750704s" podCreationTimestamp="2025-12-13 00:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:15:51.420332726 +0000 UTC m=+587.903563269" watchObservedRunningTime="2025-12-13 00:15:51.425750704 +0000 UTC m=+587.908981237"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.433576 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.489687 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.537062 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.561422 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.573144 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.650081 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.695468 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.797662 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.965797 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.987207 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 13 00:15:51 crc kubenswrapper[3562]: I1213 00:15:51.995109 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.047826 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.143159 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.161281 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.200873 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.211226 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.254461 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.255050 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.344321 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.404289 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.431749 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.445458 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.454309 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.502856 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.638635 3562 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.640624 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.663093 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.730423 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.749786 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.898461 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.919160 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.919506 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.934128 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 13 00:15:52 crc kubenswrapper[3562]: I1213 00:15:52.981853 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.044246 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.064258 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.117156 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.142616 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.180742 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.208437 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.276494 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.349569 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.383453 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.467045 3562 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.476019 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.490391 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.545569 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.657695 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.743477 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.838749 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.844602 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 13 00:15:53 crc kubenswrapper[3562]: I1213 00:15:53.988455 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.030058 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.077660 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.108978 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.216013 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.344606 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.366607 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.385005 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.440268 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.446568 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.473695 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.639757 3562 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"multus-daemon-config" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.687951 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.728628 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.760057 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.807951 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.840718 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.869302 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.921042 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.921622 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.942595 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 13 00:15:54 crc kubenswrapper[3562]: I1213 00:15:54.955936 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.088403 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.114440 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.327975 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.359371 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.377251 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.377425 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.397028 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.434385 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.624802 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.648640 3562 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.651642 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.714623 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.749572 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-b4zbk" Dec 13 00:15:55 crc kubenswrapper[3562]: I1213 00:15:55.839403 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.021827 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.059767 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.193961 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.275376 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.291077 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.338504 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.352454 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.545074 3562 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.674443 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.729811 3562 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.811006 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 13 00:15:56 crc kubenswrapper[3562]: I1213 00:15:56.833568 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.158968 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.236106 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.274760 3562 reflector.go:351] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.307265 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.347088 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.354222 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.358986 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.405964 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.406835 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.548562 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.624936 3562 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.634299 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.700092 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.886368 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Dec 13 00:15:57 crc kubenswrapper[3562]: I1213 00:15:57.949387 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.048977 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.066078 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.066945 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.074177 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.160249 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.182024 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.204284 3562 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.205865 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.413101 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.416361 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.499276 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.526017 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.579726 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.598310 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.646156 3562 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.646313 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.646377 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor" containerID="cri-o://5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf" gracePeriod=5 Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.715974 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.789698 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.818488 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.818892 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.829114 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.846030 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.916658 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.927050 3562 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 13 00:15:58 crc kubenswrapper[3562]: I1213 00:15:58.956841 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.092147 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.145799 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.172980 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.450165 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.598035 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.660505 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.744404 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.824602 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.855623 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.914652 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.971356 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 13 00:15:59 crc kubenswrapper[3562]: I1213 00:15:59.995747 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 13 00:16:00 crc kubenswrapper[3562]: I1213 00:16:00.339380 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 13 00:16:00 crc kubenswrapper[3562]: I1213 00:16:00.364575 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 13 00:16:00 crc kubenswrapper[3562]: I1213 00:16:00.404385 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 13 00:16:00 crc kubenswrapper[3562]: I1213 00:16:00.485301 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 13 00:16:00 crc kubenswrapper[3562]: I1213 00:16:00.584877 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 13 00:16:00 crc kubenswrapper[3562]: I1213 00:16:00.718698 3562 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"image-import-ca" Dec 13 00:16:00 crc kubenswrapper[3562]: I1213 00:16:00.747048 3562 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Dec 13 00:16:00 crc kubenswrapper[3562]: I1213 00:16:00.766024 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 13 00:16:00 crc kubenswrapper[3562]: I1213 00:16:00.939456 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.057854 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.135741 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.240529 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.264301 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.296480 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.308027 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.546782 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.749202 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.764590 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.767525 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.879476 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.889553 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 13 00:16:01 crc kubenswrapper[3562]: I1213 00:16:01.928261 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 13 00:16:02 crc kubenswrapper[3562]: I1213 00:16:02.229282 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 13 00:16:02 crc kubenswrapper[3562]: I1213 00:16:02.236866 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 13 00:16:02 crc kubenswrapper[3562]: I1213 00:16:02.258388 3562 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Dec 13 00:16:02 crc kubenswrapper[3562]: I1213 00:16:02.307029 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 13 00:16:02 crc kubenswrapper[3562]: I1213 00:16:02.472965 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 13 00:16:02 crc kubenswrapper[3562]: I1213 00:16:02.587686 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 13 00:16:02 crc kubenswrapper[3562]: I1213 00:16:02.877723 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 13 00:16:03 crc kubenswrapper[3562]: I1213 00:16:03.189984 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 13 00:16:03 crc kubenswrapper[3562]: I1213 00:16:03.623807 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 13 00:16:03 crc kubenswrapper[3562]: I1213 00:16:03.736817 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:16:03 crc kubenswrapper[3562]: I1213 00:16:03.736885 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:16:03 crc kubenswrapper[3562]: I1213 00:16:03.736927 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:16:03 crc kubenswrapper[3562]: I1213 00:16:03.736964 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:16:03 crc kubenswrapper[3562]: I1213 00:16:03.736992 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:16:03 crc kubenswrapper[3562]: I1213 00:16:03.910223 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.162132 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373\": container with ID starting with 13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373 not found: ID does not exist" containerID="13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.162187 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373" err="rpc error: code = NotFound desc = could not find container \"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373\": container with ID starting with 13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.163673 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53\": container with ID starting with dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53 
not found: ID does not exist" containerID="dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.163714 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53" err="rpc error: code = NotFound desc = could not find container \"dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53\": container with ID starting with dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.164515 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03f468a807b805d0e77421246b6dab8f884b5ff4b6ec700846bd126ee447fd64\": container with ID starting with 03f468a807b805d0e77421246b6dab8f884b5ff4b6ec700846bd126ee447fd64 not found: ID does not exist" containerID="03f468a807b805d0e77421246b6dab8f884b5ff4b6ec700846bd126ee447fd64" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.164600 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="03f468a807b805d0e77421246b6dab8f884b5ff4b6ec700846bd126ee447fd64" err="rpc error: code = NotFound desc = could not find container \"03f468a807b805d0e77421246b6dab8f884b5ff4b6ec700846bd126ee447fd64\": container with ID starting with 03f468a807b805d0e77421246b6dab8f884b5ff4b6ec700846bd126ee447fd64 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.165639 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807\": container with ID starting with 53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807 not found: ID does not exist" containerID="53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.165684 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807" err="rpc error: code = NotFound desc = could not find container \"53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807\": container with ID starting with 53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.166206 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3889e4065c8ccc3198452d7de12d524904c3a80cb7e5b2d0bbdb2501d86822fd\": container with ID starting with 3889e4065c8ccc3198452d7de12d524904c3a80cb7e5b2d0bbdb2501d86822fd not found: ID does not exist" containerID="3889e4065c8ccc3198452d7de12d524904c3a80cb7e5b2d0bbdb2501d86822fd" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.166254 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="3889e4065c8ccc3198452d7de12d524904c3a80cb7e5b2d0bbdb2501d86822fd" err="rpc error: code = NotFound desc = could not find container \"3889e4065c8ccc3198452d7de12d524904c3a80cb7e5b2d0bbdb2501d86822fd\": container with ID starting with 3889e4065c8ccc3198452d7de12d524904c3a80cb7e5b2d0bbdb2501d86822fd not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.166814 3562 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078\": container with ID starting with 8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078 not found: ID does not exist" containerID="8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.166875 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078" err="rpc error: code = NotFound desc = could not find container \"8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078\": container with ID starting with 8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.167417 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"192260781d494fb0fda0ec530cfb2f3769b95d68d7ed51cdba7ece427e3eeeb9\": container with ID starting with 192260781d494fb0fda0ec530cfb2f3769b95d68d7ed51cdba7ece427e3eeeb9 not found: ID does not exist" containerID="192260781d494fb0fda0ec530cfb2f3769b95d68d7ed51cdba7ece427e3eeeb9" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.167459 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="192260781d494fb0fda0ec530cfb2f3769b95d68d7ed51cdba7ece427e3eeeb9" err="rpc error: code = NotFound desc = could not find container \"192260781d494fb0fda0ec530cfb2f3769b95d68d7ed51cdba7ece427e3eeeb9\": container with ID starting with 192260781d494fb0fda0ec530cfb2f3769b95d68d7ed51cdba7ece427e3eeeb9 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.167946 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282\": container with ID starting with a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282 not found: ID does not exist" containerID="a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.167974 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282" err="rpc error: code = NotFound desc = could not find container \"a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282\": container with ID starting with a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.168391 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c1aebe7819c560028dcd0c0b6a7f4a2ff5bd55f8fe8b6cc8f9d88c23e36a22\": container with ID starting with a3c1aebe7819c560028dcd0c0b6a7f4a2ff5bd55f8fe8b6cc8f9d88c23e36a22 not found: ID does not exist" containerID="a3c1aebe7819c560028dcd0c0b6a7f4a2ff5bd55f8fe8b6cc8f9d88c23e36a22" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.168438 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="a3c1aebe7819c560028dcd0c0b6a7f4a2ff5bd55f8fe8b6cc8f9d88c23e36a22" err="rpc error: code = NotFound desc = could not find container 
\"a3c1aebe7819c560028dcd0c0b6a7f4a2ff5bd55f8fe8b6cc8f9d88c23e36a22\": container with ID starting with a3c1aebe7819c560028dcd0c0b6a7f4a2ff5bd55f8fe8b6cc8f9d88c23e36a22 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.168843 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333\": container with ID starting with ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333 not found: ID does not exist" containerID="ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.168872 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333" err="rpc error: code = NotFound desc = could not find container \"ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333\": container with ID starting with ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.169276 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f88ead159a2dab22e458270562aeefb8fa7f52c4e4928e89183b5ccc06142b7\": container with ID starting with 7f88ead159a2dab22e458270562aeefb8fa7f52c4e4928e89183b5ccc06142b7 not found: ID does not exist" containerID="7f88ead159a2dab22e458270562aeefb8fa7f52c4e4928e89183b5ccc06142b7" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.169309 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="7f88ead159a2dab22e458270562aeefb8fa7f52c4e4928e89183b5ccc06142b7" err="rpc error: code = NotFound desc = could not find container \"7f88ead159a2dab22e458270562aeefb8fa7f52c4e4928e89183b5ccc06142b7\": container with ID starting with 7f88ead159a2dab22e458270562aeefb8fa7f52c4e4928e89183b5ccc06142b7 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.169635 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3\": container with ID starting with caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3 not found: ID does not exist" containerID="caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.169665 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3" err="rpc error: code = NotFound desc = could not find container \"caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3\": container with ID starting with caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.170121 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\": container with ID starting with 05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0 not found: ID does not exist" containerID="05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0" Dec 13 
00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.170155 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0" err="rpc error: code = NotFound desc = could not find container \"05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\": container with ID starting with 05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.170466 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e7c2c3e562c9d9919442f40f7882ebbafac044309ad23f75a5ea127edd01df3\": container with ID starting with 0e7c2c3e562c9d9919442f40f7882ebbafac044309ad23f75a5ea127edd01df3 not found: ID does not exist" containerID="0e7c2c3e562c9d9919442f40f7882ebbafac044309ad23f75a5ea127edd01df3" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.170495 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="0e7c2c3e562c9d9919442f40f7882ebbafac044309ad23f75a5ea127edd01df3" err="rpc error: code = NotFound desc = could not find container \"0e7c2c3e562c9d9919442f40f7882ebbafac044309ad23f75a5ea127edd01df3\": container with ID starting with 0e7c2c3e562c9d9919442f40f7882ebbafac044309ad23f75a5ea127edd01df3 not found: ID does not exist" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.348192 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379/startup-monitor/0.log" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.348249 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.424313 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.537887 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538091 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538125 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538189 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538192 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock" (OuterVolumeSpecName: "var-lock") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538256 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538207 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests" (OuterVolumeSpecName: "manifests") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538297 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538417 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log" (OuterVolumeSpecName: "var-log") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538557 3562 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538578 3562 reconciler_common.go:300] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538595 3562 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.538609 3562 reconciler_common.go:300] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.552298 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.639586 3562 reconciler_common.go:300] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.705343 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379/startup-monitor/0.log" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.705425 3562 generic.go:334] "Generic (PLEG): container finished" podID="7dae59545f22b3fb679a7fbf878a6379" containerID="5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf" exitCode=137 Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.705462 3562 scope.go:117] "RemoveContainer" containerID="5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.705508 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.738700 3562 scope.go:117] "RemoveContainer" containerID="5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf" Dec 13 00:16:04 crc kubenswrapper[3562]: E1213 00:16:04.739340 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf\": container with ID starting with 5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf not found: ID does not exist" containerID="5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf" Dec 13 00:16:04 crc kubenswrapper[3562]: I1213 00:16:04.739398 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf"} err="failed to get container status \"5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf\": rpc error: code = NotFound desc = could not find container \"5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf\": container with ID starting with 5a5cd0542fc745e387fc153962bbe13528a3728c9ddfe91ac0cd6251421111cf not found: ID does not exist" Dec 13 00:16:05 crc kubenswrapper[3562]: I1213 00:16:05.880246 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dae59545f22b3fb679a7fbf878a6379" path="/var/lib/kubelet/pods/7dae59545f22b3fb679a7fbf878a6379/volumes" Dec 13 00:16:05 crc kubenswrapper[3562]: I1213 00:16:05.880506 3562 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 13 00:16:05 crc kubenswrapper[3562]: I1213 00:16:05.889323 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 00:16:05 crc kubenswrapper[3562]: I1213 00:16:05.889366 3562 kubelet.go:2639] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="68f0c9ac-6933-4cc6-bca0-a7faa66fc7d9" Dec 13 00:16:05 crc kubenswrapper[3562]: I1213 00:16:05.892966 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 00:16:05 crc kubenswrapper[3562]: I1213 00:16:05.892992 3562 kubelet.go:2663] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="68f0c9ac-6933-4cc6-bca0-a7faa66fc7d9" Dec 13 00:16:13 crc kubenswrapper[3562]: I1213 00:16:13.053506 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:16:13 crc kubenswrapper[3562]: I1213 00:16:13.054242 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:16:19 crc kubenswrapper[3562]: I1213 00:16:19.788260 3562 generic.go:334] "Generic (PLEG): container finished" 
podID="39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b" containerID="7c812d27cfaa6d43692b15930ce5bf1f650e7a18e9c23e20c2a11911dfaec05f" exitCode=0 Dec 13 00:16:19 crc kubenswrapper[3562]: I1213 00:16:19.788345 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" event={"ID":"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b","Type":"ContainerDied","Data":"7c812d27cfaa6d43692b15930ce5bf1f650e7a18e9c23e20c2a11911dfaec05f"} Dec 13 00:16:19 crc kubenswrapper[3562]: I1213 00:16:19.789064 3562 scope.go:117] "RemoveContainer" containerID="7c812d27cfaa6d43692b15930ce5bf1f650e7a18e9c23e20c2a11911dfaec05f" Dec 13 00:16:20 crc kubenswrapper[3562]: I1213 00:16:20.794313 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" event={"ID":"39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b","Type":"ContainerStarted","Data":"c55317679f3cce9f7b7d50d27e2ae444d34f5aefebee1e48439218454dcfbe6d"} Dec 13 00:16:20 crc kubenswrapper[3562]: I1213 00:16:20.794867 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:16:20 crc kubenswrapper[3562]: I1213 00:16:20.796370 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-cpx94" Dec 13 00:16:26 crc kubenswrapper[3562]: I1213 00:16:26.799152 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 13 00:16:33 crc kubenswrapper[3562]: I1213 00:16:33.881678 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"] Dec 13 00:16:33 crc kubenswrapper[3562]: I1213 00:16:33.882627 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" containerID="cri-o://32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd" gracePeriod=30 Dec 13 00:16:33 crc kubenswrapper[3562]: I1213 00:16:33.903604 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"] Dec 13 00:16:33 crc kubenswrapper[3562]: I1213 00:16:33.904168 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" containerID="cri-o://164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7" gracePeriod=30 Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.717473 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.738849 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.750367 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.750440 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.750518 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.750556 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.750588 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.751501 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.754672 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.754739 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config" (OuterVolumeSpecName: "config") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.759134 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.759547 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4" (OuterVolumeSpecName: "kube-api-access-pkhl4") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "kube-api-access-pkhl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.851945 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.851988 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.852053 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.852121 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.852408 3562 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.852428 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.852438 3562 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.852449 3562 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.852459 3562 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.853008 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca" (OuterVolumeSpecName: "client-ca") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: 
"21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.853166 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config" (OuterVolumeSpecName: "config") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.855561 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr" (OuterVolumeSpecName: "kube-api-access-v7vkr") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "kube-api-access-v7vkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.855676 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.859358 3562 generic.go:334] "Generic (PLEG): container finished" podID="1a3e81c3-c292-4130-9436-f94062c91efd" containerID="32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd" exitCode=0 Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.859424 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerDied","Data":"32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd"} Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.859451 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerDied","Data":"fb9e0655e69a744ff45b24a22f95a4ce47a206978562e5625906eaad2aa302db"} Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.859468 3562 scope.go:117] "RemoveContainer" containerID="32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.859637 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.860400 3562 generic.go:334] "Generic (PLEG): container finished" podID="21d29937-debd-4407-b2b1-d1053cb0f342" containerID="164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7" exitCode=0 Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.860430 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerDied","Data":"164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7"} Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.860444 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerDied","Data":"f62bf07f4ab3023f511f6471d158fb1ad1626f27dac6f381aeab57f8810ee9e7"} Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.860502 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.890584 3562 scope.go:117] "RemoveContainer" containerID="32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd" Dec 13 00:16:34 crc kubenswrapper[3562]: E1213 00:16:34.891403 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd\": container with ID starting with 32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd not found: ID does not exist" containerID="32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.891448 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd"} err="failed to get container status \"32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd\": rpc error: code = NotFound desc = could not find container \"32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd\": container with ID starting with 32138d9495aa7f4e726774c0620ac5d16f0236d7d1db41cd6313f1f8266135dd not found: ID does not exist" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.891462 3562 scope.go:117] "RemoveContainer" containerID="164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.904617 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"] Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.907956 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"] Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.913910 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"] Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.914183 3562 scope.go:117] "RemoveContainer" containerID="164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7" Dec 13 00:16:34 crc kubenswrapper[3562]: E1213 00:16:34.914589 3562 
remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7\": container with ID starting with 164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7 not found: ID does not exist" containerID="164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.914643 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7"} err="failed to get container status \"164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7\": rpc error: code = NotFound desc = could not find container \"164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7\": container with ID starting with 164ff5a5827ca89b6d533ebb14dbb49229c77c8751ebea0355117bf5381f7ff7 not found: ID does not exist" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.917339 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"] Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.953304 3562 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.953401 3562 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.953414 3562 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:34 crc kubenswrapper[3562]: I1213 00:16:34.953428 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.355659 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld"] Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.355801 3562 topology_manager.go:215] "Topology Admit Handler" podUID="3c4d00ae-cc05-4b86-a921-e8dd69d160fe" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: E1213 00:16:35.356036 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.356057 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" Dec 13 00:16:35 crc kubenswrapper[3562]: E1213 00:16:35.356076 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.356086 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor" Dec 13 00:16:35 crc kubenswrapper[3562]: 
E1213 00:16:35.356109 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" containerName="installer" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.356119 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" containerName="installer" Dec 13 00:16:35 crc kubenswrapper[3562]: E1213 00:16:35.356131 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.356140 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.356262 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7093b7-6001-4ad1-8f11-36961d0fb83f" containerName="installer" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.356278 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.356296 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.356310 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.357349 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.359689 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85467fd499-h7nc8"] Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.359765 3562 topology_manager.go:215] "Topology Admit Handler" podUID="6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" podNamespace="openshift-controller-manager" podName="controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.360383 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.360986 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.361328 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.361782 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.361982 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.362108 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.362255 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.363275 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85467fd499-h7nc8"] Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.364300 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.364687 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.365403 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.365746 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.366033 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.367104 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.381022 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.383658 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld"] Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.459421 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txlxj\" (UniqueName: \"kubernetes.io/projected/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-kube-api-access-txlxj\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.459490 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-serving-cert\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.459536 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-serving-cert\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.459701 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-config\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.459775 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-client-ca\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.459813 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-config\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.459846 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-client-ca\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.459888 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-proxy-ca-bundles\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.459945 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjdcn\" (UniqueName: \"kubernetes.io/projected/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-kube-api-access-rjdcn\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.561483 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-config\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.561534 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-client-ca\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.561561 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-config\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.561586 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-client-ca\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.561611 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-proxy-ca-bundles\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.561633 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rjdcn\" (UniqueName: \"kubernetes.io/projected/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-kube-api-access-rjdcn\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.561677 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-txlxj\" (UniqueName: \"kubernetes.io/projected/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-kube-api-access-txlxj\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.561714 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-serving-cert\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.561747 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-serving-cert\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: 
\"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.562833 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-client-ca\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.562858 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-client-ca\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.563220 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-proxy-ca-bundles\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.563386 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-config\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.564005 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-config\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.566400 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-serving-cert\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.582009 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-serving-cert\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.586581 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjdcn\" (UniqueName: \"kubernetes.io/projected/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-kube-api-access-rjdcn\") pod \"route-controller-manager-6ccd8985fb-7xkld\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.586912 3562 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-txlxj\" (UniqueName: \"kubernetes.io/projected/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-kube-api-access-txlxj\") pod \"controller-manager-85467fd499-h7nc8\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.693841 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.702575 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.867722 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld"] Dec 13 00:16:35 crc kubenswrapper[3562]: W1213 00:16:35.870851 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c4d00ae_cc05_4b86_a921_e8dd69d160fe.slice/crio-57a106b7a469216accfb7f1eedd387dcee2c03dae37ec62c1db723c9af99e5db WatchSource:0}: Error finding container 57a106b7a469216accfb7f1eedd387dcee2c03dae37ec62c1db723c9af99e5db: Status 404 returned error can't find the container with id 57a106b7a469216accfb7f1eedd387dcee2c03dae37ec62c1db723c9af99e5db Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.881236 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" path="/var/lib/kubelet/pods/1a3e81c3-c292-4130-9436-f94062c91efd/volumes" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.881811 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" path="/var/lib/kubelet/pods/21d29937-debd-4407-b2b1-d1053cb0f342/volumes" Dec 13 00:16:35 crc kubenswrapper[3562]: I1213 00:16:35.906633 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85467fd499-h7nc8"] Dec 13 00:16:35 crc kubenswrapper[3562]: W1213 00:16:35.915125 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d6c3d0f_1ab2_4544_862b_f91b3dc76d11.slice/crio-609567627f8d2b97226b449acc56f885de151800051d6b5a7a59bc7c6390ff9b WatchSource:0}: Error finding container 609567627f8d2b97226b449acc56f885de151800051d6b5a7a59bc7c6390ff9b: Status 404 returned error can't find the container with id 609567627f8d2b97226b449acc56f885de151800051d6b5a7a59bc7c6390ff9b Dec 13 00:16:36 crc kubenswrapper[3562]: I1213 00:16:36.873322 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" event={"ID":"3c4d00ae-cc05-4b86-a921-e8dd69d160fe","Type":"ContainerStarted","Data":"b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5"} Dec 13 00:16:36 crc kubenswrapper[3562]: I1213 00:16:36.873651 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" event={"ID":"3c4d00ae-cc05-4b86-a921-e8dd69d160fe","Type":"ContainerStarted","Data":"57a106b7a469216accfb7f1eedd387dcee2c03dae37ec62c1db723c9af99e5db"} Dec 13 00:16:36 crc kubenswrapper[3562]: I1213 00:16:36.873672 3562 kubelet.go:2533] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:36 crc kubenswrapper[3562]: I1213 00:16:36.875255 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" event={"ID":"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11","Type":"ContainerStarted","Data":"cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39"} Dec 13 00:16:36 crc kubenswrapper[3562]: I1213 00:16:36.875280 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" event={"ID":"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11","Type":"ContainerStarted","Data":"609567627f8d2b97226b449acc56f885de151800051d6b5a7a59bc7c6390ff9b"} Dec 13 00:16:36 crc kubenswrapper[3562]: I1213 00:16:36.878830 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:36 crc kubenswrapper[3562]: I1213 00:16:36.910640 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" podStartSLOduration=3.9105878560000003 podStartE2EDuration="3.910587856s" podCreationTimestamp="2025-12-13 00:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:16:36.908839987 +0000 UTC m=+633.392070510" watchObservedRunningTime="2025-12-13 00:16:36.910587856 +0000 UTC m=+633.393818379" Dec 13 00:16:36 crc kubenswrapper[3562]: I1213 00:16:36.910971 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" podStartSLOduration=3.910918586 podStartE2EDuration="3.910918586s" podCreationTimestamp="2025-12-13 00:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:16:36.891695913 +0000 UTC m=+633.374926436" watchObservedRunningTime="2025-12-13 00:16:36.910918586 +0000 UTC m=+633.394149129" Dec 13 00:16:37 crc kubenswrapper[3562]: I1213 00:16:37.881350 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:37 crc kubenswrapper[3562]: I1213 00:16:37.884212 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.052610 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.053008 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.053052 3562 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.053761 3562 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe619c3248a2e072658a9e187d7c753cc185680dc456aa477791dd5c81bf75af"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.053989 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://fe619c3248a2e072658a9e187d7c753cc185680dc456aa477791dd5c81bf75af" gracePeriod=600 Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.465124 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85467fd499-h7nc8"] Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.465360 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" podUID="6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" containerName="controller-manager" containerID="cri-o://cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39" gracePeriod=30 Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.494849 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld"] Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.495105 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" podUID="3c4d00ae-cc05-4b86-a921-e8dd69d160fe" containerName="route-controller-manager" containerID="cri-o://b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5" gracePeriod=30 Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.872218 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.882081 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.907336 3562 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="fe619c3248a2e072658a9e187d7c753cc185680dc456aa477791dd5c81bf75af" exitCode=0 Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.907415 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"fe619c3248a2e072658a9e187d7c753cc185680dc456aa477791dd5c81bf75af"} Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.907440 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"f31a6a4a0aa2cc3bff798df54c8e89ea969afc1109ceee8230ab21aba1f5ffe2"} Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.907458 3562 scope.go:117] "RemoveContainer" containerID="7a6e9eb4ac073e0dfa6ae2034844d4a0104e16d5711e34811caed4fa91b88545" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.912060 3562 generic.go:334] "Generic (PLEG): container finished" podID="3c4d00ae-cc05-4b86-a921-e8dd69d160fe" containerID="b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5" exitCode=0 Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.912249 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" event={"ID":"3c4d00ae-cc05-4b86-a921-e8dd69d160fe","Type":"ContainerDied","Data":"b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5"} Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.912289 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" event={"ID":"3c4d00ae-cc05-4b86-a921-e8dd69d160fe","Type":"ContainerDied","Data":"57a106b7a469216accfb7f1eedd387dcee2c03dae37ec62c1db723c9af99e5db"} Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.912402 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.922227 3562 generic.go:334] "Generic (PLEG): container finished" podID="6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" containerID="cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39" exitCode=0 Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.922273 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" event={"ID":"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11","Type":"ContainerDied","Data":"cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39"} Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.922281 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.922293 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85467fd499-h7nc8" event={"ID":"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11","Type":"ContainerDied","Data":"609567627f8d2b97226b449acc56f885de151800051d6b5a7a59bc7c6390ff9b"} Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.963378 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-serving-cert\") pod \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.963455 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txlxj\" (UniqueName: \"kubernetes.io/projected/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-kube-api-access-txlxj\") pod \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.963498 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-config\") pod \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.963598 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-client-ca\") pod \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\" (UID: \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.963640 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjdcn\" (UniqueName: \"kubernetes.io/projected/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-kube-api-access-rjdcn\") pod \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.963709 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-serving-cert\") pod \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.963743 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-client-ca\") pod \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.963775 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-config\") pod \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\" (UID: \"3c4d00ae-cc05-4b86-a921-e8dd69d160fe\") " Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.963816 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-proxy-ca-bundles\") pod \"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\" (UID: 
\"6d6c3d0f-1ab2-4544-862b-f91b3dc76d11\") " Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.964991 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-config" (OuterVolumeSpecName: "config") pod "6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" (UID: "6d6c3d0f-1ab2-4544-862b-f91b3dc76d11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.966197 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-client-ca" (OuterVolumeSpecName: "client-ca") pod "6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" (UID: "6d6c3d0f-1ab2-4544-862b-f91b3dc76d11"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.966319 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "3c4d00ae-cc05-4b86-a921-e8dd69d160fe" (UID: "3c4d00ae-cc05-4b86-a921-e8dd69d160fe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.966419 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-config" (OuterVolumeSpecName: "config") pod "3c4d00ae-cc05-4b86-a921-e8dd69d160fe" (UID: "3c4d00ae-cc05-4b86-a921-e8dd69d160fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.966635 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" (UID: "6d6c3d0f-1ab2-4544-862b-f91b3dc76d11"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.970161 3562 scope.go:117] "RemoveContainer" containerID="b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.970310 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-kube-api-access-rjdcn" (OuterVolumeSpecName: "kube-api-access-rjdcn") pod "3c4d00ae-cc05-4b86-a921-e8dd69d160fe" (UID: "3c4d00ae-cc05-4b86-a921-e8dd69d160fe"). InnerVolumeSpecName "kube-api-access-rjdcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.977939 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3c4d00ae-cc05-4b86-a921-e8dd69d160fe" (UID: "3c4d00ae-cc05-4b86-a921-e8dd69d160fe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.978504 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" (UID: "6d6c3d0f-1ab2-4544-862b-f91b3dc76d11"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 00:16:43 crc kubenswrapper[3562]: I1213 00:16:43.980920 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-kube-api-access-txlxj" (OuterVolumeSpecName: "kube-api-access-txlxj") pod "6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" (UID: "6d6c3d0f-1ab2-4544-862b-f91b3dc76d11"). InnerVolumeSpecName "kube-api-access-txlxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.017356 3562 scope.go:117] "RemoveContainer" containerID="b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5" Dec 13 00:16:44 crc kubenswrapper[3562]: E1213 00:16:44.017801 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5\": container with ID starting with b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5 not found: ID does not exist" containerID="b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5" Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.017842 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5"} err="failed to get container status \"b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5\": rpc error: code = NotFound desc = could not find container \"b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5\": container with ID starting with b8dcf9bc9fa01341d5efa493687da8aff3a69b80d382b78e8e44aa2c975244e5 not found: ID does not exist" Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.017854 3562 scope.go:117] "RemoveContainer" containerID="cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39" Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.041229 3562 scope.go:117] "RemoveContainer" containerID="cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39" Dec 13 00:16:44 crc kubenswrapper[3562]: E1213 00:16:44.041683 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39\": container with ID starting with cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39 not found: ID does not exist" containerID="cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39" Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.041723 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39"} err="failed to get container status \"cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39\": rpc error: code = NotFound desc = could not find container \"cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39\": container with ID starting with 
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.041723 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39"} err="failed to get container status \"cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39\": rpc error: code = NotFound desc = could not find container \"cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39\": container with ID starting with cfb142efdb092f5bbe3e45445d0c68758b2310771352b0a3a6384ffc8fa99c39 not found: ID does not exist"
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.065186 3562 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.065231 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-txlxj\" (UniqueName: \"kubernetes.io/projected/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-kube-api-access-txlxj\") on node \"crc\" DevicePath \"\""
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.065243 3562 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-config\") on node \"crc\" DevicePath \"\""
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.065254 3562 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-client-ca\") on node \"crc\" DevicePath \"\""
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.065265 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rjdcn\" (UniqueName: \"kubernetes.io/projected/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-kube-api-access-rjdcn\") on node \"crc\" DevicePath \"\""
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.065276 3562 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.065286 3562 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-client-ca\") on node \"crc\" DevicePath \"\""
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.065297 3562 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c4d00ae-cc05-4b86-a921-e8dd69d160fe-config\") on node \"crc\" DevicePath \"\""
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.065308 3562 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.240885 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld"]
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.246744 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccd8985fb-7xkld"]
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.252231 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85467fd499-h7nc8"]
Dec 13 00:16:44 crc kubenswrapper[3562]: I1213 00:16:44.255137 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-85467fd499-h7nc8"]
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.362495 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55c69f9c9-p6m48"]
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.362660 3562 topology_manager.go:215] "Topology Admit Handler" podUID="6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" podNamespace="openshift-controller-manager" podName="controller-manager-55c69f9c9-p6m48"
Dec 13 00:16:45 crc kubenswrapper[3562]: E1213 00:16:45.362958 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" containerName="controller-manager"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.362975 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" containerName="controller-manager"
Dec 13 00:16:45 crc kubenswrapper[3562]: E1213 00:16:45.363003 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3c4d00ae-cc05-4b86-a921-e8dd69d160fe" containerName="route-controller-manager"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.363013 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4d00ae-cc05-4b86-a921-e8dd69d160fe" containerName="route-controller-manager"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.363166 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c4d00ae-cc05-4b86-a921-e8dd69d160fe" containerName="route-controller-manager"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.363193 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" containerName="controller-manager"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.363744 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.366522 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc"]
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.366669 3562 topology_manager.go:215] "Topology Admit Handler" podUID="a14b5016-2cc0-4561-b177-5c65996557d8" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-f47b7f8d6-rnddc"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.367413 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.374276 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.374711 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.374949 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.376953 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.377013 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.377029 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.377029 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.377305 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.377422 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.377610 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.377954 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.381880 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.384642 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.385213 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-config\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48"
Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.385266 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-serving-cert\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48"
for volume \"kube-api-access-kkzrd\" (UniqueName: \"kubernetes.io/projected/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-kube-api-access-kkzrd\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.385318 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-client-ca\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.385360 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-proxy-ca-bundles\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.395841 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55c69f9c9-p6m48"] Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.398893 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc"] Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.486658 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98978\" (UniqueName: \"kubernetes.io/projected/a14b5016-2cc0-4561-b177-5c65996557d8-kube-api-access-98978\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.486717 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-proxy-ca-bundles\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.486742 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14b5016-2cc0-4561-b177-5c65996557d8-serving-cert\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.486769 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14b5016-2cc0-4561-b177-5c65996557d8-config\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.486799 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-config\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.486864 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a14b5016-2cc0-4561-b177-5c65996557d8-client-ca\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.486898 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-serving-cert\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.486955 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-kkzrd\" (UniqueName: \"kubernetes.io/projected/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-kube-api-access-kkzrd\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.486986 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-client-ca\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.487993 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-proxy-ca-bundles\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.488507 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-client-ca\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.488665 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-config\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.491568 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-serving-cert\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " 
pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.506514 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkzrd\" (UniqueName: \"kubernetes.io/projected/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-kube-api-access-kkzrd\") pod \"controller-manager-55c69f9c9-p6m48\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.588217 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-98978\" (UniqueName: \"kubernetes.io/projected/a14b5016-2cc0-4561-b177-5c65996557d8-kube-api-access-98978\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.588278 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14b5016-2cc0-4561-b177-5c65996557d8-serving-cert\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.588307 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14b5016-2cc0-4561-b177-5c65996557d8-config\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.588333 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a14b5016-2cc0-4561-b177-5c65996557d8-client-ca\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.596380 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a14b5016-2cc0-4561-b177-5c65996557d8-client-ca\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.597192 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14b5016-2cc0-4561-b177-5c65996557d8-config\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.601872 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14b5016-2cc0-4561-b177-5c65996557d8-serving-cert\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.621827 3562 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-98978\" (UniqueName: \"kubernetes.io/projected/a14b5016-2cc0-4561-b177-5c65996557d8-kube-api-access-98978\") pod \"route-controller-manager-f47b7f8d6-rnddc\" (UID: \"a14b5016-2cc0-4561-b177-5c65996557d8\") " pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.680214 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.688294 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.884875 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c4d00ae-cc05-4b86-a921-e8dd69d160fe" path="/var/lib/kubelet/pods/3c4d00ae-cc05-4b86-a921-e8dd69d160fe/volumes" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.886267 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d6c3d0f-1ab2-4544-862b-f91b3dc76d11" path="/var/lib/kubelet/pods/6d6c3d0f-1ab2-4544-862b-f91b3dc76d11/volumes" Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.886916 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55c69f9c9-p6m48"] Dec 13 00:16:45 crc kubenswrapper[3562]: W1213 00:16:45.897058 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e3fa2b0_5864_43fe_a7f5_3ecc0780ef9a.slice/crio-8a28de54a918c5998b5a3911571dcf6048030ae98386ee98c7d64793b5afbbad WatchSource:0}: Error finding container 8a28de54a918c5998b5a3911571dcf6048030ae98386ee98c7d64793b5afbbad: Status 404 returned error can't find the container with id 8a28de54a918c5998b5a3911571dcf6048030ae98386ee98c7d64793b5afbbad Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.930246 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc"] Dec 13 00:16:45 crc kubenswrapper[3562]: W1213 00:16:45.938154 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda14b5016_2cc0_4561_b177_5c65996557d8.slice/crio-725eaf1b9e3594fd88ed39dc8ebf8756af2ad60563fcaa8f77c5813a29ee68f5 WatchSource:0}: Error finding container 725eaf1b9e3594fd88ed39dc8ebf8756af2ad60563fcaa8f77c5813a29ee68f5: Status 404 returned error can't find the container with id 725eaf1b9e3594fd88ed39dc8ebf8756af2ad60563fcaa8f77c5813a29ee68f5 Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.949291 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" event={"ID":"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a","Type":"ContainerStarted","Data":"8a28de54a918c5998b5a3911571dcf6048030ae98386ee98c7d64793b5afbbad"} Dec 13 00:16:45 crc kubenswrapper[3562]: I1213 00:16:45.951681 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" event={"ID":"a14b5016-2cc0-4561-b177-5c65996557d8","Type":"ContainerStarted","Data":"725eaf1b9e3594fd88ed39dc8ebf8756af2ad60563fcaa8f77c5813a29ee68f5"} Dec 13 00:16:46 crc kubenswrapper[3562]: I1213 00:16:46.960064 3562 kubelet.go:2461] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" event={"ID":"a14b5016-2cc0-4561-b177-5c65996557d8","Type":"ContainerStarted","Data":"d33114f91ac224997aa12532a38583c85a80c6083892e4e9cdd4ad5d52eb278a"} Dec 13 00:16:46 crc kubenswrapper[3562]: I1213 00:16:46.961841 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:46 crc kubenswrapper[3562]: I1213 00:16:46.962678 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" event={"ID":"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a","Type":"ContainerStarted","Data":"c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123"} Dec 13 00:16:46 crc kubenswrapper[3562]: I1213 00:16:46.968537 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" Dec 13 00:16:46 crc kubenswrapper[3562]: I1213 00:16:46.982008 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f47b7f8d6-rnddc" podStartSLOduration=3.981923657 podStartE2EDuration="3.981923657s" podCreationTimestamp="2025-12-13 00:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:16:46.978624214 +0000 UTC m=+643.461854737" watchObservedRunningTime="2025-12-13 00:16:46.981923657 +0000 UTC m=+643.465154180" Dec 13 00:16:47 crc kubenswrapper[3562]: I1213 00:16:47.021212 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" podStartSLOduration=4.021161284 podStartE2EDuration="4.021161284s" podCreationTimestamp="2025-12-13 00:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:16:47.0203082 +0000 UTC m=+643.503538733" watchObservedRunningTime="2025-12-13 00:16:47.021161284 +0000 UTC m=+643.504391807" Dec 13 00:16:47 crc kubenswrapper[3562]: I1213 00:16:47.969304 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:47 crc kubenswrapper[3562]: I1213 00:16:47.974600 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.228492 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-44qcg"] Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.229870 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller" containerID="cri-o://bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19" gracePeriod=30 Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.229945 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb" containerID="cri-o://0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769" gracePeriod=30 Dec 13 00:16:56 
crc kubenswrapper[3562]: I1213 00:16:56.230045 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node" containerID="cri-o://6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af" gracePeriod=30 Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.230068 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging" containerID="cri-o://f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b" gracePeriod=30 Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.230156 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd" containerID="cri-o://8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad" gracePeriod=30 Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.230217 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb" containerID="cri-o://0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc" gracePeriod=30 Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.230297 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f" gracePeriod=30 Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.321334 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" containerID="cri-o://42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff" gracePeriod=30 Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.527002 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-acl-logging/1.log" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.527922 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-controller/1.log" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.528441 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574337 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nwh2n"] Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574438 3562 topology_manager.go:215] "Topology Admit Handler" podUID="8e04d604-29e2-4e25-b07c-f3041d4682b5" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: E1213 00:16:56.574565 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574577 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 00:16:56 crc kubenswrapper[3562]: E1213 00:16:56.574594 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574606 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd" Dec 13 00:16:56 crc kubenswrapper[3562]: E1213 00:16:56.574618 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574626 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb" Dec 13 00:16:56 crc kubenswrapper[3562]: E1213 00:16:56.574642 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574650 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging" Dec 13 00:16:56 crc kubenswrapper[3562]: E1213 00:16:56.574661 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574671 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb" Dec 13 00:16:56 crc kubenswrapper[3562]: E1213 00:16:56.574682 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574690 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" Dec 13 00:16:56 crc kubenswrapper[3562]: E1213 00:16:56.574702 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kubecfg-setup" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574710 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kubecfg-setup" Dec 13 00:16:56 crc kubenswrapper[3562]: E1213 00:16:56.574722 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574730 3562 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller" Dec 13 00:16:56 crc kubenswrapper[3562]: E1213 00:16:56.574741 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574748 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574874 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574898 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574930 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574946 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574955 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574965 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574974 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.574986 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.576983 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.581495 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-jpwlq" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729144 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729256 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729297 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729338 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729367 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729396 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729446 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729492 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729526 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729563 3562 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729597 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729630 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729666 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729706 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729744 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729780 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729783 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729810 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729871 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729880 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729934 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729991 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash" (OuterVolumeSpecName: "host-slash") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729973 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729940 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729953 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket" (OuterVolumeSpecName: "log-socket") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.729926 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730050 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730051 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log" (OuterVolumeSpecName: "node-log") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730052 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730102 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730106 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730139 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730002 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730026 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730391 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-cni-netd\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730428 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8e04d604-29e2-4e25-b07c-f3041d4682b5-ovn-node-metrics-cert\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730462 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-run-ovn-kubernetes\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730498 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-node-log\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730492 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730735 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-etc-openvswitch\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.730945 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-run-ovn\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731005 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731111 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-var-lib-openvswitch\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731220 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-run-openvswitch\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731299 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8e04d604-29e2-4e25-b07c-f3041d4682b5-ovnkube-script-lib\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731332 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e04d604-29e2-4e25-b07c-f3041d4682b5-env-overrides\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731374 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-log-socket\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731455 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" 
(OuterVolumeSpecName: "ovnkube-script-lib") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731545 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-kubelet\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731597 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-systemd-units\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731638 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-run-netns\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731699 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8e04d604-29e2-4e25-b07c-f3041d4682b5-ovnkube-config\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731751 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-cni-bin\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731795 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kzp2\" (UniqueName: \"kubernetes.io/projected/8e04d604-29e2-4e25-b07c-f3041d4682b5-kube-api-access-8kzp2\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.731877 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-slash\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732106 3562 reconciler_common.go:300] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732141 3562 reconciler_common.go:300] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") on node 
\"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732156 3562 reconciler_common.go:300] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732171 3562 reconciler_common.go:300] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732187 3562 reconciler_common.go:300] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732197 3562 reconciler_common.go:300] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732208 3562 reconciler_common.go:300] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732218 3562 reconciler_common.go:300] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732229 3562 reconciler_common.go:300] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732240 3562 reconciler_common.go:300] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732251 3562 reconciler_common.go:300] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732261 3562 reconciler_common.go:300] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732272 3562 reconciler_common.go:300] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732283 3562 reconciler_common.go:300] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732292 3562 reconciler_common.go:300] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") on node \"crc\" DevicePath \"\"" 
Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732302 3562 reconciler_common.go:300] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.732314 3562 reconciler_common.go:300] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.737336 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.737438 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" (OuterVolumeSpecName: "kube-api-access-f9495") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "kube-api-access-f9495". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834067 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-run-ovn-kubernetes\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834126 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-node-log\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834158 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-etc-openvswitch\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834204 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-run-ovn\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834237 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834234 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-run-ovn-kubernetes\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834295 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-etc-openvswitch\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834317 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-var-lib-openvswitch\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834264 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-var-lib-openvswitch\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834381 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-run-ovn\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834475 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-node-log\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834574 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-run-openvswitch\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834621 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-run-openvswitch\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834626 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8e04d604-29e2-4e25-b07c-f3041d4682b5-ovnkube-script-lib\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834640 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834683 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e04d604-29e2-4e25-b07c-f3041d4682b5-env-overrides\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834798 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-log-socket\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834850 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-kubelet\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834877 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-systemd-units\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834918 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-run-netns\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834958 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-systemd-units\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834968 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8e04d604-29e2-4e25-b07c-f3041d4682b5-ovnkube-config\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834979 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-log-socket\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.834985 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-kubelet\") pod \"ovnkube-node-nwh2n\" (UID: 
\"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835044 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8kzp2\" (UniqueName: \"kubernetes.io/projected/8e04d604-29e2-4e25-b07c-f3041d4682b5-kube-api-access-8kzp2\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835071 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-cni-bin\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835094 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-slash\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835157 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-cni-netd\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835190 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-cni-bin\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835182 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-run-netns\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835194 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-slash\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835215 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8e04d604-29e2-4e25-b07c-f3041d4682b5-ovn-node-metrics-cert\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835231 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e04d604-29e2-4e25-b07c-f3041d4682b5-host-cni-netd\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc 
kubenswrapper[3562]: I1213 00:16:56.835471 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835517 3562 reconciler_common.go:300] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835563 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e04d604-29e2-4e25-b07c-f3041d4682b5-env-overrides\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835782 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8e04d604-29e2-4e25-b07c-f3041d4682b5-ovnkube-config\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.835795 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8e04d604-29e2-4e25-b07c-f3041d4682b5-ovnkube-script-lib\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.838224 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8e04d604-29e2-4e25-b07c-f3041d4682b5-ovn-node-metrics-cert\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.853695 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kzp2\" (UniqueName: \"kubernetes.io/projected/8e04d604-29e2-4e25-b07c-f3041d4682b5-kube-api-access-8kzp2\") pod \"ovnkube-node-nwh2n\" (UID: \"8e04d604-29e2-4e25-b07c-f3041d4682b5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:56 crc kubenswrapper[3562]: I1213 00:16:56.889897 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.012235 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-acl-logging/1.log" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.012865 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-controller/1.log" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013259 3562 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff" exitCode=0 Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013292 3562 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769" exitCode=0 Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013295 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013310 3562 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc" exitCode=0 Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013326 3562 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad" exitCode=0 Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013343 3562 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f" exitCode=0 Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013361 3562 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af" exitCode=0 Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013378 3562 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b" exitCode=143 Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013374 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013397 3562 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19" exitCode=143 Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013427 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013441 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013453 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013463 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013474 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013489 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013497 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013507 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013516 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013524 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013533 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013539 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013545 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013551 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013557 3562 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013563 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013569 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013575 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013583 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013591 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013599 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013605 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013608 3562 scope.go:117] "RemoveContainer" containerID="42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013612 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013756 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013764 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013772 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013778 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013784 3562 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013806 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"c010b1c206b69a7e00e4458b3e8c1380ac49ca645d3bab467e11b98caeafb9f0"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013818 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013826 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013833 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013841 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013847 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013853 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013860 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013866 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.013872 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.016791 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" event={"ID":"8e04d604-29e2-4e25-b07c-f3041d4682b5","Type":"ContainerStarted","Data":"8c1da8b72d02229dd5d212845f9444267150b8fccb5049273ed69e7afbea48fa"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.018509 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.019065 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/6.log" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.019137 3562 generic.go:334] "Generic (PLEG): container finished" 
podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="cbff4d1914227c5f07b10f5345c567907d0e1f2bf0a32861c6c049395796485b" exitCode=2 Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.019180 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"cbff4d1914227c5f07b10f5345c567907d0e1f2bf0a32861c6c049395796485b"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.019208 3562 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"958f2eff55abf2b420e4008cfd5b0306001e5f2c7ce3d7533e59c3fad562aeae"} Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.019742 3562 scope.go:117] "RemoveContainer" containerID="cbff4d1914227c5f07b10f5345c567907d0e1f2bf0a32861c6c049395796485b" Dec 13 00:16:57 crc kubenswrapper[3562]: E1213 00:16:57.020320 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.057773 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-44qcg"] Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.062460 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-44qcg"] Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.062591 3562 scope.go:117] "RemoveContainer" containerID="0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.088247 3562 scope.go:117] "RemoveContainer" containerID="0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.149010 3562 scope.go:117] "RemoveContainer" containerID="8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.167927 3562 scope.go:117] "RemoveContainer" containerID="569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.185878 3562 scope.go:117] "RemoveContainer" containerID="6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.223078 3562 scope.go:117] "RemoveContainer" containerID="f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.278079 3562 scope.go:117] "RemoveContainer" containerID="bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.298318 3562 scope.go:117] "RemoveContainer" containerID="f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.317384 3562 scope.go:117] "RemoveContainer" containerID="42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff" Dec 13 00:16:57 crc kubenswrapper[3562]: E1213 00:16:57.317764 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff\": container with ID starting with 
42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff not found: ID does not exist" containerID="42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.317822 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff"} err="failed to get container status \"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff\": rpc error: code = NotFound desc = could not find container \"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff\": container with ID starting with 42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.317842 3562 scope.go:117] "RemoveContainer" containerID="0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769" Dec 13 00:16:57 crc kubenswrapper[3562]: E1213 00:16:57.318431 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769\": container with ID starting with 0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769 not found: ID does not exist" containerID="0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.318486 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769"} err="failed to get container status \"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769\": rpc error: code = NotFound desc = could not find container \"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769\": container with ID starting with 0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.318499 3562 scope.go:117] "RemoveContainer" containerID="0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc" Dec 13 00:16:57 crc kubenswrapper[3562]: E1213 00:16:57.318783 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc\": container with ID starting with 0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc not found: ID does not exist" containerID="0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.318817 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc"} err="failed to get container status \"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc\": rpc error: code = NotFound desc = could not find container \"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc\": container with ID starting with 0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.318828 3562 scope.go:117] "RemoveContainer" containerID="8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad" Dec 13 00:16:57 crc kubenswrapper[3562]: E1213 00:16:57.319172 3562 remote_runtime.go:432] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad\": container with ID starting with 8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad not found: ID does not exist" containerID="8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.319197 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad"} err="failed to get container status \"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad\": rpc error: code = NotFound desc = could not find container \"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad\": container with ID starting with 8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.319206 3562 scope.go:117] "RemoveContainer" containerID="569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f" Dec 13 00:16:57 crc kubenswrapper[3562]: E1213 00:16:57.319592 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f\": container with ID starting with 569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f not found: ID does not exist" containerID="569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.319633 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f"} err="failed to get container status \"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f\": rpc error: code = NotFound desc = could not find container \"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f\": container with ID starting with 569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.319646 3562 scope.go:117] "RemoveContainer" containerID="6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af" Dec 13 00:16:57 crc kubenswrapper[3562]: E1213 00:16:57.319951 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af\": container with ID starting with 6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af not found: ID does not exist" containerID="6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.319981 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af"} err="failed to get container status \"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af\": rpc error: code = NotFound desc = could not find container \"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af\": container with ID starting with 6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.319991 3562 scope.go:117] 
"RemoveContainer" containerID="f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b" Dec 13 00:16:57 crc kubenswrapper[3562]: E1213 00:16:57.320236 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b\": container with ID starting with f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b not found: ID does not exist" containerID="f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.320271 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b"} err="failed to get container status \"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b\": rpc error: code = NotFound desc = could not find container \"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b\": container with ID starting with f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.320284 3562 scope.go:117] "RemoveContainer" containerID="bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19" Dec 13 00:16:57 crc kubenswrapper[3562]: E1213 00:16:57.320623 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19\": container with ID starting with bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19 not found: ID does not exist" containerID="bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.320652 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19"} err="failed to get container status \"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19\": rpc error: code = NotFound desc = could not find container \"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19\": container with ID starting with bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.320662 3562 scope.go:117] "RemoveContainer" containerID="f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0" Dec 13 00:16:57 crc kubenswrapper[3562]: E1213 00:16:57.320896 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0\": container with ID starting with f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0 not found: ID does not exist" containerID="f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.320941 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0"} err="failed to get container status \"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0\": rpc error: code = NotFound desc = could not find container \"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0\": container with 
ID starting with f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.320953 3562 scope.go:117] "RemoveContainer" containerID="42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.321180 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff"} err="failed to get container status \"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff\": rpc error: code = NotFound desc = could not find container \"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff\": container with ID starting with 42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.321202 3562 scope.go:117] "RemoveContainer" containerID="0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.321403 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769"} err="failed to get container status \"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769\": rpc error: code = NotFound desc = could not find container \"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769\": container with ID starting with 0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.321431 3562 scope.go:117] "RemoveContainer" containerID="0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.321615 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc"} err="failed to get container status \"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc\": rpc error: code = NotFound desc = could not find container \"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc\": container with ID starting with 0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.321633 3562 scope.go:117] "RemoveContainer" containerID="8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.321824 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad"} err="failed to get container status \"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad\": rpc error: code = NotFound desc = could not find container \"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad\": container with ID starting with 8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.321842 3562 scope.go:117] "RemoveContainer" containerID="569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.322146 3562 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f"} err="failed to get container status \"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f\": rpc error: code = NotFound desc = could not find container \"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f\": container with ID starting with 569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.322164 3562 scope.go:117] "RemoveContainer" containerID="6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.322712 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af"} err="failed to get container status \"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af\": rpc error: code = NotFound desc = could not find container \"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af\": container with ID starting with 6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.322733 3562 scope.go:117] "RemoveContainer" containerID="f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.323067 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b"} err="failed to get container status \"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b\": rpc error: code = NotFound desc = could not find container \"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b\": container with ID starting with f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.323088 3562 scope.go:117] "RemoveContainer" containerID="bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.323398 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19"} err="failed to get container status \"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19\": rpc error: code = NotFound desc = could not find container \"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19\": container with ID starting with bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.323421 3562 scope.go:117] "RemoveContainer" containerID="f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.323671 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0"} err="failed to get container status \"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0\": rpc error: code = NotFound desc = could not find container \"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0\": container with ID starting with f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0 not found: ID does not exist" Dec 
13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.323688 3562 scope.go:117] "RemoveContainer" containerID="42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.323927 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff"} err="failed to get container status \"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff\": rpc error: code = NotFound desc = could not find container \"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff\": container with ID starting with 42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.323948 3562 scope.go:117] "RemoveContainer" containerID="0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.324146 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769"} err="failed to get container status \"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769\": rpc error: code = NotFound desc = could not find container \"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769\": container with ID starting with 0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.324162 3562 scope.go:117] "RemoveContainer" containerID="0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.324419 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc"} err="failed to get container status \"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc\": rpc error: code = NotFound desc = could not find container \"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc\": container with ID starting with 0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.324439 3562 scope.go:117] "RemoveContainer" containerID="8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.324616 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad"} err="failed to get container status \"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad\": rpc error: code = NotFound desc = could not find container \"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad\": container with ID starting with 8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.324630 3562 scope.go:117] "RemoveContainer" containerID="569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.324819 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f"} err="failed to get container status 
\"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f\": rpc error: code = NotFound desc = could not find container \"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f\": container with ID starting with 569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.324839 3562 scope.go:117] "RemoveContainer" containerID="6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.325012 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af"} err="failed to get container status \"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af\": rpc error: code = NotFound desc = could not find container \"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af\": container with ID starting with 6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.325030 3562 scope.go:117] "RemoveContainer" containerID="f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.325232 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b"} err="failed to get container status \"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b\": rpc error: code = NotFound desc = could not find container \"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b\": container with ID starting with f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.325257 3562 scope.go:117] "RemoveContainer" containerID="bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.325447 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19"} err="failed to get container status \"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19\": rpc error: code = NotFound desc = could not find container \"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19\": container with ID starting with bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.325462 3562 scope.go:117] "RemoveContainer" containerID="f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.325660 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0"} err="failed to get container status \"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0\": rpc error: code = NotFound desc = could not find container \"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0\": container with ID starting with f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.325680 3562 scope.go:117] "RemoveContainer" 
containerID="42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.325839 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff"} err="failed to get container status \"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff\": rpc error: code = NotFound desc = could not find container \"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff\": container with ID starting with 42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.325855 3562 scope.go:117] "RemoveContainer" containerID="0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.326044 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769"} err="failed to get container status \"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769\": rpc error: code = NotFound desc = could not find container \"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769\": container with ID starting with 0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.326063 3562 scope.go:117] "RemoveContainer" containerID="0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.326271 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc"} err="failed to get container status \"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc\": rpc error: code = NotFound desc = could not find container \"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc\": container with ID starting with 0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.326290 3562 scope.go:117] "RemoveContainer" containerID="8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.326477 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad"} err="failed to get container status \"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad\": rpc error: code = NotFound desc = could not find container \"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad\": container with ID starting with 8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.326498 3562 scope.go:117] "RemoveContainer" containerID="569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.326686 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f"} err="failed to get container status \"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f\": rpc error: code = NotFound desc = could not find 
container \"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f\": container with ID starting with 569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.326704 3562 scope.go:117] "RemoveContainer" containerID="6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.326890 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af"} err="failed to get container status \"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af\": rpc error: code = NotFound desc = could not find container \"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af\": container with ID starting with 6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.326948 3562 scope.go:117] "RemoveContainer" containerID="f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.327316 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b"} err="failed to get container status \"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b\": rpc error: code = NotFound desc = could not find container \"f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b\": container with ID starting with f41680682228b8f78c399b3833b0efdc4f2e17060b4158c788bc97f18f98428b not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.327334 3562 scope.go:117] "RemoveContainer" containerID="bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.327555 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19"} err="failed to get container status \"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19\": rpc error: code = NotFound desc = could not find container \"bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19\": container with ID starting with bf1a86f1acab43578dc69e33d56079d3193ddddd52c91b77cc329317f8aeef19 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.327579 3562 scope.go:117] "RemoveContainer" containerID="f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.327815 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0"} err="failed to get container status \"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0\": rpc error: code = NotFound desc = could not find container \"f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0\": container with ID starting with f968a3e94307889638684d24e6122df1506a5cfad6e0cce68879d106182844b0 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.327839 3562 scope.go:117] "RemoveContainer" containerID="42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.328111 3562 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff"} err="failed to get container status \"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff\": rpc error: code = NotFound desc = could not find container \"42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff\": container with ID starting with 42b26fd145f8c23082ac9366b503eaa610dfe5f8cf214181397855b5f66201ff not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.328127 3562 scope.go:117] "RemoveContainer" containerID="0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.328367 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769"} err="failed to get container status \"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769\": rpc error: code = NotFound desc = could not find container \"0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769\": container with ID starting with 0facfc544e10ddf63b593145df26a401b3022a1eff530ef79bc1a721672e0769 not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.328389 3562 scope.go:117] "RemoveContainer" containerID="0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.328599 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc"} err="failed to get container status \"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc\": rpc error: code = NotFound desc = could not find container \"0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc\": container with ID starting with 0234c71f3bb882dec8a9d4dd88a3ac07afcfb144218f359950383a42a6d3a9dc not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.328623 3562 scope.go:117] "RemoveContainer" containerID="8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.328826 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad"} err="failed to get container status \"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad\": rpc error: code = NotFound desc = could not find container \"8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad\": container with ID starting with 8a731876e3bbbab3a44d68b148fce29008990c0afae08eff5e1b7843e2f119ad not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.328846 3562 scope.go:117] "RemoveContainer" containerID="569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.329078 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f"} err="failed to get container status \"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f\": rpc error: code = NotFound desc = could not find container \"569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f\": container with ID starting with 
569f526146fcd390ebfe15d96174d7b40622a40e05ef443639d55b236fd6ca8f not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.329126 3562 scope.go:117] "RemoveContainer" containerID="6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.329336 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af"} err="failed to get container status \"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af\": rpc error: code = NotFound desc = could not find container \"6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af\": container with ID starting with 6ff3cacf40a00028b6ec8f160befac64722580e285ef4efbc36852ad2230f9af not found: ID does not exist" Dec 13 00:16:57 crc kubenswrapper[3562]: I1213 00:16:57.880080 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" path="/var/lib/kubelet/pods/3e19f9e8-9a37-4ca8-9790-c219750ab482/volumes" Dec 13 00:16:58 crc kubenswrapper[3562]: I1213 00:16:58.026290 3562 generic.go:334] "Generic (PLEG): container finished" podID="8e04d604-29e2-4e25-b07c-f3041d4682b5" containerID="586893cc0e16d53204c234c3fb0478d3486d594c12cd2571c1253eee443bcf64" exitCode=0 Dec 13 00:16:58 crc kubenswrapper[3562]: I1213 00:16:58.026335 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" event={"ID":"8e04d604-29e2-4e25-b07c-f3041d4682b5","Type":"ContainerDied","Data":"586893cc0e16d53204c234c3fb0478d3486d594c12cd2571c1253eee443bcf64"} Dec 13 00:16:59 crc kubenswrapper[3562]: I1213 00:16:59.032271 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" event={"ID":"8e04d604-29e2-4e25-b07c-f3041d4682b5","Type":"ContainerStarted","Data":"cbcf9c4429de544dcd43e301bb31be06936bfe64b0dacccdc35018e7a212dd7d"} Dec 13 00:16:59 crc kubenswrapper[3562]: I1213 00:16:59.032810 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" event={"ID":"8e04d604-29e2-4e25-b07c-f3041d4682b5","Type":"ContainerStarted","Data":"931e9a138b468ab7c046caf45252c428c3b794a21f2ef2947c7c5f329cd0e2a0"} Dec 13 00:16:59 crc kubenswrapper[3562]: I1213 00:16:59.032821 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" event={"ID":"8e04d604-29e2-4e25-b07c-f3041d4682b5","Type":"ContainerStarted","Data":"b227b838c51610502494de91ba20dcf0941fc8d9b3824bc36abc4679a4248834"} Dec 13 00:16:59 crc kubenswrapper[3562]: I1213 00:16:59.032829 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" event={"ID":"8e04d604-29e2-4e25-b07c-f3041d4682b5","Type":"ContainerStarted","Data":"2686cc1fc3fa867294affc4f84c9c27b1e7f269b13c5a2c1920103822c23eeaf"} Dec 13 00:16:59 crc kubenswrapper[3562]: I1213 00:16:59.032839 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" event={"ID":"8e04d604-29e2-4e25-b07c-f3041d4682b5","Type":"ContainerStarted","Data":"1dc228a84f40a90cfcaa73e2d83eb00f4af5661f1ed0c5910e11b17bc8492922"} Dec 13 00:17:00 crc kubenswrapper[3562]: I1213 00:17:00.039533 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" 
event={"ID":"8e04d604-29e2-4e25-b07c-f3041d4682b5","Type":"ContainerStarted","Data":"428af3656877a01be64962ec23740c986e3411fed11dc64b7dc44b60d070cd94"} Dec 13 00:17:02 crc kubenswrapper[3562]: I1213 00:17:02.052548 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" event={"ID":"8e04d604-29e2-4e25-b07c-f3041d4682b5","Type":"ContainerStarted","Data":"b4055a84bace02f740df2cd9ec6f9f4751c1328c34c462d6556015cc604d6cc1"} Dec 13 00:17:03 crc kubenswrapper[3562]: I1213 00:17:03.738241 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:17:03 crc kubenswrapper[3562]: I1213 00:17:03.738507 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:17:03 crc kubenswrapper[3562]: I1213 00:17:03.738530 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:17:03 crc kubenswrapper[3562]: I1213 00:17:03.738558 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:17:03 crc kubenswrapper[3562]: I1213 00:17:03.738578 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.092655 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" event={"ID":"8e04d604-29e2-4e25-b07c-f3041d4682b5","Type":"ContainerStarted","Data":"e4f091794669adfdab3e931272ff5132da4bba5489d1107d9753d9784da52fa9"} Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.118767 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" podStartSLOduration=8.118726469 podStartE2EDuration="8.118726469s" podCreationTimestamp="2025-12-13 00:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:17:04.115289013 +0000 UTC m=+660.598519536" watchObservedRunningTime="2025-12-13 00:17:04.118726469 +0000 UTC m=+660.601956992" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.174391 3562 scope.go:117] "RemoveContainer" containerID="958f2eff55abf2b420e4008cfd5b0306001e5f2c7ce3d7533e59c3fad562aeae" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.470361 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe\": container with ID starting with de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe not found: ID does not exist" containerID="de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.470616 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe" err="rpc error: code = NotFound desc = could not find container \"de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe\": container with ID starting with de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe not found: ID does not exist" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.473393 3562 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\": container with ID starting with 51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652 not found: ID does not exist" containerID="51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.473423 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652" err="rpc error: code = NotFound desc = could not find container \"51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\": container with ID starting with 51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652 not found: ID does not exist" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.474139 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\": container with ID starting with cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9 not found: ID does not exist" containerID="cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.474192 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9" err="rpc error: code = NotFound desc = could not find container \"cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\": container with ID starting with cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9 not found: ID does not exist" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.474647 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\": container with ID starting with 4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e not found: ID does not exist" containerID="4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.474675 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e" err="rpc error: code = NotFound desc = could not find container \"4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\": container with ID starting with 4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e not found: ID does not exist" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.475144 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\": container with ID starting with 4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9 not found: ID does not exist" containerID="4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.475177 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9" err="rpc error: code = NotFound desc = could not find container 
\"4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\": container with ID starting with 4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9 not found: ID does not exist" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.475512 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\": container with ID starting with 951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa not found: ID does not exist" containerID="951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.475539 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa" err="rpc error: code = NotFound desc = could not find container \"951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\": container with ID starting with 951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa not found: ID does not exist" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.475859 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\": container with ID starting with 246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b not found: ID does not exist" containerID="246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.475896 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b" err="rpc error: code = NotFound desc = could not find container \"246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\": container with ID starting with 246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b not found: ID does not exist" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.476241 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\": container with ID starting with 6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212 not found: ID does not exist" containerID="6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.476277 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212" err="rpc error: code = NotFound desc = could not find container \"6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\": container with ID starting with 6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212 not found: ID does not exist" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.476587 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\": container with ID starting with 2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5 not found: ID does not exist" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Dec 13 
00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.476614 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" err="rpc error: code = NotFound desc = could not find container \"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\": container with ID starting with 2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5 not found: ID does not exist" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.477001 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9\": container with ID starting with a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9 not found: ID does not exist" containerID="a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.477035 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9" err="rpc error: code = NotFound desc = could not find container \"a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9\": container with ID starting with a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9 not found: ID does not exist" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.477362 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\": container with ID starting with c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6 not found: ID does not exist" containerID="c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.477398 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6" err="rpc error: code = NotFound desc = could not find container \"c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\": container with ID starting with c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6 not found: ID does not exist" Dec 13 00:17:04 crc kubenswrapper[3562]: E1213 00:17:04.478366 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba\": container with ID starting with 0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba not found: ID does not exist" containerID="0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba" Dec 13 00:17:04 crc kubenswrapper[3562]: I1213 00:17:04.478401 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba" err="rpc error: code = NotFound desc = could not find container \"0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba\": container with ID starting with 0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba not found: ID does not exist" Dec 13 00:17:05 crc kubenswrapper[3562]: I1213 00:17:05.098454 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log" 
Dec 13 00:17:05 crc kubenswrapper[3562]: I1213 00:17:05.098930 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:17:05 crc kubenswrapper[3562]: I1213 00:17:05.098964 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:17:05 crc kubenswrapper[3562]: I1213 00:17:05.098978 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:17:05 crc kubenswrapper[3562]: I1213 00:17:05.180209 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:17:05 crc kubenswrapper[3562]: I1213 00:17:05.180601 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:17:08 crc kubenswrapper[3562]: I1213 00:17:08.875282 3562 scope.go:117] "RemoveContainer" containerID="cbff4d1914227c5f07b10f5345c567907d0e1f2bf0a32861c6c049395796485b" Dec 13 00:17:09 crc kubenswrapper[3562]: I1213 00:17:09.137075 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log" Dec 13 00:17:10 crc kubenswrapper[3562]: I1213 00:17:10.143619 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log" Dec 13 00:17:10 crc kubenswrapper[3562]: I1213 00:17:10.144110 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"97d0d41a8ebcfc9690e470e9462931abe1bc9a854ee037dd387558e84e8f877f"} Dec 13 00:17:26 crc kubenswrapper[3562]: I1213 00:17:26.966095 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nwh2n" Dec 13 00:17:53 crc kubenswrapper[3562]: I1213 00:17:53.882736 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55c69f9c9-p6m48"] Dec 13 00:17:53 crc kubenswrapper[3562]: I1213 00:17:53.884658 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" podUID="6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" containerName="controller-manager" containerID="cri-o://c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123" gracePeriod=30 Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.299683 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.398460 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-config\") pod \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.398802 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-client-ca\") pod \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.398915 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkzrd\" (UniqueName: \"kubernetes.io/projected/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-kube-api-access-kkzrd\") pod \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.398946 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-proxy-ca-bundles\") pod \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.398969 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-serving-cert\") pod \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\" (UID: \"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a\") " Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.399384 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-config" (OuterVolumeSpecName: "config") pod "6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" (UID: "6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.400125 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" (UID: "6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.400184 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-client-ca" (OuterVolumeSpecName: "client-ca") pod "6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" (UID: "6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.404803 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-kube-api-access-kkzrd" (OuterVolumeSpecName: "kube-api-access-kkzrd") pod "6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" (UID: "6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a"). InnerVolumeSpecName "kube-api-access-kkzrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.405393 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" (UID: "6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.413322 3562 generic.go:334] "Generic (PLEG): container finished" podID="6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" containerID="c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123" exitCode=0 Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.413372 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" event={"ID":"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a","Type":"ContainerDied","Data":"c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123"} Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.413397 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" event={"ID":"6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a","Type":"ContainerDied","Data":"8a28de54a918c5998b5a3911571dcf6048030ae98386ee98c7d64793b5afbbad"} Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.413406 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55c69f9c9-p6m48" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.413413 3562 scope.go:117] "RemoveContainer" containerID="c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.446002 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55c69f9c9-p6m48"] Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.449386 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-55c69f9c9-p6m48"] Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.453199 3562 scope.go:117] "RemoveContainer" containerID="c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123" Dec 13 00:17:54 crc kubenswrapper[3562]: E1213 00:17:54.453612 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123\": container with ID starting with c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123 not found: ID does not exist" containerID="c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.453665 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123"} err="failed to get container status \"c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123\": rpc error: code = NotFound desc = could not find container \"c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123\": container with ID starting with c857e6980cf7f2df8d7f5f53fb435a11caf9bb6721412209982cc35983771123 not found: ID does not exist" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.501676 3562 reconciler_common.go:300] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-config\") on node \"crc\" DevicePath \"\"" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.501737 3562 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.501749 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kkzrd\" (UniqueName: \"kubernetes.io/projected/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-kube-api-access-kkzrd\") on node \"crc\" DevicePath \"\"" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.501781 3562 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 00:17:54 crc kubenswrapper[3562]: I1213 00:17:54.501791 3562 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.417179 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69bd6c8859-pwzkl"] Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.417278 3562 topology_manager.go:215] "Topology Admit Handler" podUID="2c8d574c-a76e-4358-9318-ec713c334086" podNamespace="openshift-controller-manager" podName="controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: E1213 00:17:55.417420 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" containerName="controller-manager" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.417433 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" containerName="controller-manager" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.417525 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" containerName="controller-manager" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.417853 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.420006 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.420135 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.420510 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.420868 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.420947 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.422275 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.432647 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.435091 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69bd6c8859-pwzkl"] Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.614974 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c8d574c-a76e-4358-9318-ec713c334086-proxy-ca-bundles\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.615076 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c8d574c-a76e-4358-9318-ec713c334086-config\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.615109 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c8d574c-a76e-4358-9318-ec713c334086-client-ca\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.615152 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c8d574c-a76e-4358-9318-ec713c334086-serving-cert\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.615307 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgcbw\" (UniqueName: 
\"kubernetes.io/projected/2c8d574c-a76e-4358-9318-ec713c334086-kube-api-access-kgcbw\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.716702 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c8d574c-a76e-4358-9318-ec713c334086-serving-cert\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.716767 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-kgcbw\" (UniqueName: \"kubernetes.io/projected/2c8d574c-a76e-4358-9318-ec713c334086-kube-api-access-kgcbw\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.716817 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c8d574c-a76e-4358-9318-ec713c334086-proxy-ca-bundles\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.716848 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c8d574c-a76e-4358-9318-ec713c334086-config\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.716867 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c8d574c-a76e-4358-9318-ec713c334086-client-ca\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.718053 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c8d574c-a76e-4358-9318-ec713c334086-client-ca\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.718232 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c8d574c-a76e-4358-9318-ec713c334086-config\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.718279 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c8d574c-a76e-4358-9318-ec713c334086-proxy-ca-bundles\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " 
pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.726707 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c8d574c-a76e-4358-9318-ec713c334086-serving-cert\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.733927 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgcbw\" (UniqueName: \"kubernetes.io/projected/2c8d574c-a76e-4358-9318-ec713c334086-kube-api-access-kgcbw\") pod \"controller-manager-69bd6c8859-pwzkl\" (UID: \"2c8d574c-a76e-4358-9318-ec713c334086\") " pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:55 crc kubenswrapper[3562]: I1213 00:17:55.882114 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a" path="/var/lib/kubelet/pods/6e3fa2b0-5864-43fe-a7f5-3ecc0780ef9a/volumes" Dec 13 00:17:56 crc kubenswrapper[3562]: I1213 00:17:56.031593 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:56 crc kubenswrapper[3562]: I1213 00:17:56.412537 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69bd6c8859-pwzkl"] Dec 13 00:17:56 crc kubenswrapper[3562]: I1213 00:17:56.424447 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" event={"ID":"2c8d574c-a76e-4358-9318-ec713c334086","Type":"ContainerStarted","Data":"3decc07a0c4bfbba29cfbaa553ea0890147c4cdb1b889ff84b810c2bf63f2852"} Dec 13 00:17:57 crc kubenswrapper[3562]: I1213 00:17:57.429778 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" event={"ID":"2c8d574c-a76e-4358-9318-ec713c334086","Type":"ContainerStarted","Data":"28904765e750a182329f374ab8b6d35072569514ad0ec708b1e27fa23f9f15ac"} Dec 13 00:17:58 crc kubenswrapper[3562]: I1213 00:17:58.433511 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:58 crc kubenswrapper[3562]: I1213 00:17:58.438603 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" Dec 13 00:17:58 crc kubenswrapper[3562]: I1213 00:17:58.459535 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69bd6c8859-pwzkl" podStartSLOduration=5.459487927 podStartE2EDuration="5.459487927s" podCreationTimestamp="2025-12-13 00:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:17:57.446350813 +0000 UTC m=+713.929581356" watchObservedRunningTime="2025-12-13 00:17:58.459487927 +0000 UTC m=+714.942718450" Dec 13 00:18:03 crc kubenswrapper[3562]: I1213 00:18:03.739453 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:18:03 crc kubenswrapper[3562]: I1213 00:18:03.740088 3562 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:18:03 crc kubenswrapper[3562]: I1213 00:18:03.740141 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:18:03 crc kubenswrapper[3562]: I1213 00:18:03.740161 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:18:03 crc kubenswrapper[3562]: I1213 00:18:03.740207 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:18:40 crc kubenswrapper[3562]: I1213 00:18:40.485884 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-64qm7"] Dec 13 00:18:40 crc kubenswrapper[3562]: I1213 00:18:40.486802 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-64qm7" podUID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" containerName="registry-server" containerID="cri-o://5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f" gracePeriod=30 Dec 13 00:18:40 crc kubenswrapper[3562]: I1213 00:18:40.891182 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.076691 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7p7h\" (UniqueName: \"kubernetes.io/projected/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-kube-api-access-w7p7h\") pod \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.076776 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-utilities\") pod \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.077010 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-catalog-content\") pod \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\" (UID: \"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7\") " Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.078764 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-utilities" (OuterVolumeSpecName: "utilities") pod "ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" (UID: "ae12f059-fbec-4ccb-93d1-d8c606b0c6b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.088677 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-kube-api-access-w7p7h" (OuterVolumeSpecName: "kube-api-access-w7p7h") pod "ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" (UID: "ae12f059-fbec-4ccb-93d1-d8c606b0c6b7"). InnerVolumeSpecName "kube-api-access-w7p7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.178543 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w7p7h\" (UniqueName: \"kubernetes.io/projected/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-kube-api-access-w7p7h\") on node \"crc\" DevicePath \"\"" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.178585 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.223333 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" (UID: "ae12f059-fbec-4ccb-93d1-d8c606b0c6b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.279742 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.629791 3562 generic.go:334] "Generic (PLEG): container finished" podID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" containerID="5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f" exitCode=0 Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.630201 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64qm7" event={"ID":"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7","Type":"ContainerDied","Data":"5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f"} Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.630223 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64qm7" event={"ID":"ae12f059-fbec-4ccb-93d1-d8c606b0c6b7","Type":"ContainerDied","Data":"a6e36a02a16e0ac1d9a2c36448f6448fb962963f2a2ac9df2fcba0117f58eb0f"} Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.630243 3562 scope.go:117] "RemoveContainer" containerID="5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.630358 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64qm7" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.665121 3562 scope.go:117] "RemoveContainer" containerID="96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.665713 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-64qm7"] Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.670322 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-64qm7"] Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.689852 3562 scope.go:117] "RemoveContainer" containerID="575e8a8215a47d53e0f5a57f6ca008dc59416d9fe2a567e2476e64d517e1237e" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.719836 3562 scope.go:117] "RemoveContainer" containerID="5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f" Dec 13 00:18:41 crc kubenswrapper[3562]: E1213 00:18:41.720309 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f\": container with ID starting with 5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f not found: ID does not exist" containerID="5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.720364 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f"} err="failed to get container status \"5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f\": rpc error: code = NotFound desc = could not find container \"5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f\": container with ID starting with 5db7793626d7a19582f9855fa2977c277586bcca9b7198570985af98ef92818f not found: ID does not exist" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.720379 3562 scope.go:117] "RemoveContainer" containerID="96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6" Dec 13 00:18:41 crc kubenswrapper[3562]: E1213 00:18:41.720792 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6\": container with ID starting with 96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6 not found: ID does not exist" containerID="96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.720826 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6"} err="failed to get container status \"96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6\": rpc error: code = NotFound desc = could not find container \"96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6\": container with ID starting with 96c9b60fbbecf9d3f7f517941c5b0438b35398274df73dba5eeb077206034db6 not found: ID does not exist" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.720836 3562 scope.go:117] "RemoveContainer" containerID="575e8a8215a47d53e0f5a57f6ca008dc59416d9fe2a567e2476e64d517e1237e" Dec 13 00:18:41 crc kubenswrapper[3562]: E1213 00:18:41.721205 3562 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575e8a8215a47d53e0f5a57f6ca008dc59416d9fe2a567e2476e64d517e1237e\": container with ID starting with 575e8a8215a47d53e0f5a57f6ca008dc59416d9fe2a567e2476e64d517e1237e not found: ID does not exist" containerID="575e8a8215a47d53e0f5a57f6ca008dc59416d9fe2a567e2476e64d517e1237e" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.721225 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575e8a8215a47d53e0f5a57f6ca008dc59416d9fe2a567e2476e64d517e1237e"} err="failed to get container status \"575e8a8215a47d53e0f5a57f6ca008dc59416d9fe2a567e2476e64d517e1237e\": rpc error: code = NotFound desc = could not find container \"575e8a8215a47d53e0f5a57f6ca008dc59416d9fe2a567e2476e64d517e1237e\": container with ID starting with 575e8a8215a47d53e0f5a57f6ca008dc59416d9fe2a567e2476e64d517e1237e not found: ID does not exist" Dec 13 00:18:41 crc kubenswrapper[3562]: I1213 00:18:41.883036 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" path="/var/lib/kubelet/pods/ae12f059-fbec-4ccb-93d1-d8c606b0c6b7/volumes" Dec 13 00:18:43 crc kubenswrapper[3562]: I1213 00:18:43.052609 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:18:43 crc kubenswrapper[3562]: I1213 00:18:43.052689 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:19:03 crc kubenswrapper[3562]: I1213 00:19:03.741191 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:19:03 crc kubenswrapper[3562]: I1213 00:19:03.741719 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:19:03 crc kubenswrapper[3562]: I1213 00:19:03.741742 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:19:03 crc kubenswrapper[3562]: I1213 00:19:03.741776 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:19:03 crc kubenswrapper[3562]: I1213 00:19:03.741795 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:19:13 crc kubenswrapper[3562]: I1213 00:19:13.052712 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:19:13 crc kubenswrapper[3562]: I1213 00:19:13.053081 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:19:43 crc kubenswrapper[3562]: I1213 00:19:43.053241 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:19:43 crc kubenswrapper[3562]: I1213 00:19:43.053772 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:19:43 crc kubenswrapper[3562]: I1213 00:19:43.053815 3562 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 13 00:19:43 crc kubenswrapper[3562]: I1213 00:19:43.054651 3562 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f31a6a4a0aa2cc3bff798df54c8e89ea969afc1109ceee8230ab21aba1f5ffe2"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 00:19:43 crc kubenswrapper[3562]: I1213 00:19:43.054866 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://f31a6a4a0aa2cc3bff798df54c8e89ea969afc1109ceee8230ab21aba1f5ffe2" gracePeriod=600 Dec 13 00:19:43 crc kubenswrapper[3562]: I1213 00:19:43.879271 3562 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="f31a6a4a0aa2cc3bff798df54c8e89ea969afc1109ceee8230ab21aba1f5ffe2" exitCode=0 Dec 13 00:19:43 crc kubenswrapper[3562]: I1213 00:19:43.880066 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"f31a6a4a0aa2cc3bff798df54c8e89ea969afc1109ceee8230ab21aba1f5ffe2"} Dec 13 00:19:43 crc kubenswrapper[3562]: I1213 00:19:43.880108 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"4ad64da496031e88d35a286745296696d921692888dc942adb2368e3bb1eb23a"} Dec 13 00:19:43 crc kubenswrapper[3562]: I1213 00:19:43.880127 3562 scope.go:117] "RemoveContainer" containerID="fe619c3248a2e072658a9e187d7c753cc185680dc456aa477791dd5c81bf75af" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.494638 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kt6b2"] Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.494755 3562 topology_manager.go:215] "Topology Admit Handler" podUID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" podNamespace="openshift-marketplace" podName="certified-operators-kt6b2" Dec 13 00:19:45 crc kubenswrapper[3562]: E1213 00:19:45.494920 3562 cpu_manager.go:396] "RemoveStaleState: removing container" 
podUID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" containerName="registry-server" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.494934 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" containerName="registry-server" Dec 13 00:19:45 crc kubenswrapper[3562]: E1213 00:19:45.494946 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" containerName="extract-content" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.494956 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" containerName="extract-content" Dec 13 00:19:45 crc kubenswrapper[3562]: E1213 00:19:45.494979 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" containerName="extract-utilities" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.494987 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" containerName="extract-utilities" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.495092 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae12f059-fbec-4ccb-93d1-d8c606b0c6b7" containerName="registry-server" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.495941 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kt6b2" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.505607 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kt6b2"] Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.583208 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-catalog-content\") pod \"certified-operators-kt6b2\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") " pod="openshift-marketplace/certified-operators-kt6b2" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.583300 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbbvf\" (UniqueName: \"kubernetes.io/projected/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-kube-api-access-qbbvf\") pod \"certified-operators-kt6b2\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") " pod="openshift-marketplace/certified-operators-kt6b2" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.583331 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-utilities\") pod \"certified-operators-kt6b2\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") " pod="openshift-marketplace/certified-operators-kt6b2" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.684119 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qbbvf\" (UniqueName: \"kubernetes.io/projected/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-kube-api-access-qbbvf\") pod \"certified-operators-kt6b2\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") " pod="openshift-marketplace/certified-operators-kt6b2" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.684176 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-utilities\") pod 
\"certified-operators-kt6b2\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") " pod="openshift-marketplace/certified-operators-kt6b2" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.684207 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-catalog-content\") pod \"certified-operators-kt6b2\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") " pod="openshift-marketplace/certified-operators-kt6b2" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.684660 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-catalog-content\") pod \"certified-operators-kt6b2\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") " pod="openshift-marketplace/certified-operators-kt6b2" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.684691 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-utilities\") pod \"certified-operators-kt6b2\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") " pod="openshift-marketplace/certified-operators-kt6b2" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.716074 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbbvf\" (UniqueName: \"kubernetes.io/projected/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-kube-api-access-qbbvf\") pod \"certified-operators-kt6b2\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") " pod="openshift-marketplace/certified-operators-kt6b2" Dec 13 00:19:45 crc kubenswrapper[3562]: I1213 00:19:45.838122 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kt6b2" Dec 13 00:19:46 crc kubenswrapper[3562]: I1213 00:19:46.236044 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kt6b2"] Dec 13 00:19:46 crc kubenswrapper[3562]: W1213 00:19:46.242869 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aae24fa_152a_4dd6_a89e_1feb71e3b5f1.slice/crio-7df1dd644358aeb16e5f181d1ea741e983b856ea8af2b18f7b4874704f27ddb5 WatchSource:0}: Error finding container 7df1dd644358aeb16e5f181d1ea741e983b856ea8af2b18f7b4874704f27ddb5: Status 404 returned error can't find the container with id 7df1dd644358aeb16e5f181d1ea741e983b856ea8af2b18f7b4874704f27ddb5 Dec 13 00:19:46 crc kubenswrapper[3562]: I1213 00:19:46.902395 3562 generic.go:334] "Generic (PLEG): container finished" podID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" containerID="6f533d23ee1caf4b812c925ac463bdeefe173f568462b0f071041e6754fd0fbe" exitCode=0 Dec 13 00:19:46 crc kubenswrapper[3562]: I1213 00:19:46.902516 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt6b2" event={"ID":"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1","Type":"ContainerDied","Data":"6f533d23ee1caf4b812c925ac463bdeefe173f568462b0f071041e6754fd0fbe"} Dec 13 00:19:46 crc kubenswrapper[3562]: I1213 00:19:46.902740 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt6b2" event={"ID":"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1","Type":"ContainerStarted","Data":"7df1dd644358aeb16e5f181d1ea741e983b856ea8af2b18f7b4874704f27ddb5"} Dec 13 00:19:46 crc kubenswrapper[3562]: I1213 00:19:46.904149 3562 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 00:19:47 crc kubenswrapper[3562]: I1213 00:19:47.908055 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt6b2" event={"ID":"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1","Type":"ContainerStarted","Data":"3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd"} Dec 13 00:19:49 crc kubenswrapper[3562]: I1213 00:19:49.918655 3562 generic.go:334] "Generic (PLEG): container finished" podID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" containerID="3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd" exitCode=0 Dec 13 00:19:49 crc kubenswrapper[3562]: I1213 00:19:49.918701 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt6b2" event={"ID":"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1","Type":"ContainerDied","Data":"3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd"} Dec 13 00:19:50 crc kubenswrapper[3562]: I1213 00:19:50.924765 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt6b2" event={"ID":"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1","Type":"ContainerStarted","Data":"84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a"} Dec 13 00:19:50 crc kubenswrapper[3562]: I1213 00:19:50.948238 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kt6b2" podStartSLOduration=2.673714887 podStartE2EDuration="5.948196402s" podCreationTimestamp="2025-12-13 00:19:45 +0000 UTC" firstStartedPulling="2025-12-13 00:19:46.903943685 +0000 UTC m=+823.387174208" lastFinishedPulling="2025-12-13 00:19:50.1784252 +0000 UTC m=+826.661655723" 
Dec 13 00:19:55 crc kubenswrapper[3562]: I1213 00:19:55.839374 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kt6b2"
Dec 13 00:19:55 crc kubenswrapper[3562]: I1213 00:19:55.839905 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kt6b2"
Dec 13 00:19:56 crc kubenswrapper[3562]: I1213 00:19:56.029321 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kt6b2"
Dec 13 00:19:56 crc kubenswrapper[3562]: I1213 00:19:56.117304 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kt6b2"
Dec 13 00:19:56 crc kubenswrapper[3562]: I1213 00:19:56.156306 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kt6b2"]
Dec 13 00:19:57 crc kubenswrapper[3562]: I1213 00:19:57.959012 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kt6b2" podUID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" containerName="registry-server" containerID="cri-o://84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a" gracePeriod=2
Dec 13 00:20:00 crc kubenswrapper[3562]: I1213 00:20:00.948654 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kt6b2"
Dec 13 00:20:00 crc kubenswrapper[3562]: I1213 00:20:00.975031 3562 generic.go:334] "Generic (PLEG): container finished" podID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" containerID="84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a" exitCode=0
Dec 13 00:20:00 crc kubenswrapper[3562]: I1213 00:20:00.975079 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt6b2" event={"ID":"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1","Type":"ContainerDied","Data":"84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a"}
Dec 13 00:20:00 crc kubenswrapper[3562]: I1213 00:20:00.975113 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt6b2" event={"ID":"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1","Type":"ContainerDied","Data":"7df1dd644358aeb16e5f181d1ea741e983b856ea8af2b18f7b4874704f27ddb5"}
Dec 13 00:20:00 crc kubenswrapper[3562]: I1213 00:20:00.975136 3562 scope.go:117] "RemoveContainer" containerID="84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a"
Dec 13 00:20:00 crc kubenswrapper[3562]: I1213 00:20:00.975200 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kt6b2"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.007678 3562 scope.go:117] "RemoveContainer" containerID="3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.050321 3562 scope.go:117] "RemoveContainer" containerID="6f533d23ee1caf4b812c925ac463bdeefe173f568462b0f071041e6754fd0fbe"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.066872 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-catalog-content\") pod \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") "
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.067040 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbbvf\" (UniqueName: \"kubernetes.io/projected/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-kube-api-access-qbbvf\") pod \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") "
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.067070 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-utilities\") pod \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\" (UID: \"6aae24fa-152a-4dd6-a89e-1feb71e3b5f1\") "
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.068064 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-utilities" (OuterVolumeSpecName: "utilities") pod "6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" (UID: "6aae24fa-152a-4dd6-a89e-1feb71e3b5f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.073103 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-kube-api-access-qbbvf" (OuterVolumeSpecName: "kube-api-access-qbbvf") pod "6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" (UID: "6aae24fa-152a-4dd6-a89e-1feb71e3b5f1"). InnerVolumeSpecName "kube-api-access-qbbvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.073424 3562 scope.go:117] "RemoveContainer" containerID="84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a"
Dec 13 00:20:01 crc kubenswrapper[3562]: E1213 00:20:01.073873 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a\": container with ID starting with 84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a not found: ID does not exist" containerID="84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.073964 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a"} err="failed to get container status \"84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a\": rpc error: code = NotFound desc = could not find container \"84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a\": container with ID starting with 84ca226bfb8b7b2ce5f041060dc706776d170a19e186a809d855d5fbf1cb9a9a not found: ID does not exist"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.073979 3562 scope.go:117] "RemoveContainer" containerID="3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd"
Dec 13 00:20:01 crc kubenswrapper[3562]: E1213 00:20:01.074533 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd\": container with ID starting with 3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd not found: ID does not exist" containerID="3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.074557 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd"} err="failed to get container status \"3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd\": rpc error: code = NotFound desc = could not find container \"3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd\": container with ID starting with 3035232801ce245842f79518f5c994711220c54999f4e7ea4f7feb24214aa4cd not found: ID does not exist"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.074565 3562 scope.go:117] "RemoveContainer" containerID="6f533d23ee1caf4b812c925ac463bdeefe173f568462b0f071041e6754fd0fbe"
Dec 13 00:20:01 crc kubenswrapper[3562]: E1213 00:20:01.074822 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f533d23ee1caf4b812c925ac463bdeefe173f568462b0f071041e6754fd0fbe\": container with ID starting with 6f533d23ee1caf4b812c925ac463bdeefe173f568462b0f071041e6754fd0fbe not found: ID does not exist" containerID="6f533d23ee1caf4b812c925ac463bdeefe173f568462b0f071041e6754fd0fbe"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.074842 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f533d23ee1caf4b812c925ac463bdeefe173f568462b0f071041e6754fd0fbe"} err="failed to get container status \"6f533d23ee1caf4b812c925ac463bdeefe173f568462b0f071041e6754fd0fbe\": rpc error: code = NotFound desc = could not find container \"6f533d23ee1caf4b812c925ac463bdeefe173f568462b0f071041e6754fd0fbe\": container with ID starting with 6f533d23ee1caf4b812c925ac463bdeefe173f568462b0f071041e6754fd0fbe not found: ID does not exist"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.168572 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qbbvf\" (UniqueName: \"kubernetes.io/projected/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-kube-api-access-qbbvf\") on node \"crc\" DevicePath \"\""
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.168606 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-utilities\") on node \"crc\" DevicePath \"\""
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.299558 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" (UID: "6aae24fa-152a-4dd6-a89e-1feb71e3b5f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.370946 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.524503 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"]
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.524626 3562 topology_manager.go:215] "Topology Admit Handler" podUID="02a21e18-3551-4536-a03a-23f7d60b531d" podNamespace="openshift-marketplace" podName="8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: E1213 00:20:01.524777 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" containerName="extract-utilities"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.524791 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" containerName="extract-utilities"
Dec 13 00:20:01 crc kubenswrapper[3562]: E1213 00:20:01.524815 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" containerName="registry-server"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.524823 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" containerName="registry-server"
Dec 13 00:20:01 crc kubenswrapper[3562]: E1213 00:20:01.524837 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" containerName="extract-content"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.524845 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" containerName="extract-content"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.525005 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" containerName="registry-server"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.525803 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.528650 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-4w6pc"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.535008 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"]
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.608176 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kt6b2"]
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.612378 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kt6b2"]
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.674300 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jbsh\" (UniqueName: \"kubernetes.io/projected/02a21e18-3551-4536-a03a-23f7d60b531d-kube-api-access-9jbsh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.674368 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.674535 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.775818 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9jbsh\" (UniqueName: \"kubernetes.io/projected/02a21e18-3551-4536-a03a-23f7d60b531d-kube-api-access-9jbsh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.775875 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.775960 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.776502 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.776561 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.793894 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jbsh\" (UniqueName: \"kubernetes.io/projected/02a21e18-3551-4536-a03a-23f7d60b531d-kube-api-access-9jbsh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.844749 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"
Dec 13 00:20:01 crc kubenswrapper[3562]: I1213 00:20:01.880536 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aae24fa-152a-4dd6-a89e-1feb71e3b5f1" path="/var/lib/kubelet/pods/6aae24fa-152a-4dd6-a89e-1feb71e3b5f1/volumes"
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.040798 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9"] Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.182614 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.182704 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsq9p\" (UniqueName: \"kubernetes.io/projected/c41a3fdd-e80b-4bc6-b513-912f3503e60f-kube-api-access-rsq9p\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.182973 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.248273 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj"] Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.284695 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.284745 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rsq9p\" (UniqueName: \"kubernetes.io/projected/c41a3fdd-e80b-4bc6-b513-912f3503e60f-kube-api-access-rsq9p\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.284786 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.285235 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.285330 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.306751 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsq9p\" (UniqueName: \"kubernetes.io/projected/c41a3fdd-e80b-4bc6-b513-912f3503e60f-kube-api-access-rsq9p\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.361518 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.766030 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9"] Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.986366 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj" event={"ID":"02a21e18-3551-4536-a03a-23f7d60b531d","Type":"ContainerStarted","Data":"6a96d089734b1803104659d4af6265ae55c6e8edc182550319a21ad9a227d899"} Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.986411 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj" event={"ID":"02a21e18-3551-4536-a03a-23f7d60b531d","Type":"ContainerStarted","Data":"496d243c4e7fa3aa6cc1fbd6c1c68fa7abf4fe5821d465a46147b384df86af33"} Dec 13 00:20:02 crc kubenswrapper[3562]: I1213 00:20:02.987184 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" event={"ID":"c41a3fdd-e80b-4bc6-b513-912f3503e60f","Type":"ContainerStarted","Data":"5ce1d7d5f93bce9593f29550a8b73462e4c346ee15a942da310b979a9972f62f"} Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.419898 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx"] Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.420303 3562 topology_manager.go:215] "Topology Admit Handler" podUID="7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" podNamespace="openshift-marketplace" podName="695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.421217 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.428186 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx"] Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.599168 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.599246 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25jp5\" (UniqueName: \"kubernetes.io/projected/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-kube-api-access-25jp5\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.599456 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.700845 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.700937 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.701013 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-25jp5\" (UniqueName: \"kubernetes.io/projected/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-kube-api-access-25jp5\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.701441 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " 
pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.701492 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.721871 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-25jp5\" (UniqueName: \"kubernetes.io/projected/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-kube-api-access-25jp5\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.735076 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.742019 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.742085 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.742119 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.742144 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.742184 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.813413 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r72v7"] Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.813832 3562 topology_manager.go:215] "Topology Admit Handler" podUID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" podNamespace="openshift-marketplace" podName="redhat-operators-r72v7" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.817254 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r72v7"] Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.817349 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.903933 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-catalog-content\") pod \"redhat-operators-r72v7\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.903994 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-utilities\") pod \"redhat-operators-r72v7\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.904049 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlzn2\" (UniqueName: \"kubernetes.io/projected/0ebc0c2b-b940-4992-830c-39d6c3021ad1-kube-api-access-hlzn2\") pod \"redhat-operators-r72v7\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.951229 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx"] Dec 13 00:20:03 crc kubenswrapper[3562]: W1213 00:20:03.961864 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ac424f6_5cb0_466a_8ca8_c2b58d61c8c3.slice/crio-cbe3d91802237045c15c654b8aae04ca96dafb420eb5275743c3bdb8cc86631a WatchSource:0}: Error finding container cbe3d91802237045c15c654b8aae04ca96dafb420eb5275743c3bdb8cc86631a: Status 404 returned error can't find the container with id cbe3d91802237045c15c654b8aae04ca96dafb420eb5275743c3bdb8cc86631a Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.992270 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" event={"ID":"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3","Type":"ContainerStarted","Data":"cbe3d91802237045c15c654b8aae04ca96dafb420eb5275743c3bdb8cc86631a"} Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.993633 3562 generic.go:334] "Generic (PLEG): container finished" podID="c41a3fdd-e80b-4bc6-b513-912f3503e60f" containerID="6841e6efd3adf45651feb952e5c8096dccdbe1322889cc7a76e8de4a797971e1" exitCode=0 Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.993690 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" event={"ID":"c41a3fdd-e80b-4bc6-b513-912f3503e60f","Type":"ContainerDied","Data":"6841e6efd3adf45651feb952e5c8096dccdbe1322889cc7a76e8de4a797971e1"} Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.995635 3562 generic.go:334] "Generic (PLEG): container finished" podID="02a21e18-3551-4536-a03a-23f7d60b531d" containerID="6a96d089734b1803104659d4af6265ae55c6e8edc182550319a21ad9a227d899" exitCode=0 Dec 13 00:20:03 crc kubenswrapper[3562]: I1213 00:20:03.995675 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj" 
event={"ID":"02a21e18-3551-4536-a03a-23f7d60b531d","Type":"ContainerDied","Data":"6a96d089734b1803104659d4af6265ae55c6e8edc182550319a21ad9a227d899"} Dec 13 00:20:04 crc kubenswrapper[3562]: I1213 00:20:04.005228 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-catalog-content\") pod \"redhat-operators-r72v7\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:04 crc kubenswrapper[3562]: I1213 00:20:04.006090 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-utilities\") pod \"redhat-operators-r72v7\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:04 crc kubenswrapper[3562]: I1213 00:20:04.006284 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hlzn2\" (UniqueName: \"kubernetes.io/projected/0ebc0c2b-b940-4992-830c-39d6c3021ad1-kube-api-access-hlzn2\") pod \"redhat-operators-r72v7\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:04 crc kubenswrapper[3562]: I1213 00:20:04.005860 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-catalog-content\") pod \"redhat-operators-r72v7\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:04 crc kubenswrapper[3562]: I1213 00:20:04.006583 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-utilities\") pod \"redhat-operators-r72v7\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:04 crc kubenswrapper[3562]: I1213 00:20:04.024479 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlzn2\" (UniqueName: \"kubernetes.io/projected/0ebc0c2b-b940-4992-830c-39d6c3021ad1-kube-api-access-hlzn2\") pod \"redhat-operators-r72v7\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:04 crc kubenswrapper[3562]: I1213 00:20:04.140089 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:04 crc kubenswrapper[3562]: I1213 00:20:04.339166 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r72v7"] Dec 13 00:20:04 crc kubenswrapper[3562]: W1213 00:20:04.370425 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ebc0c2b_b940_4992_830c_39d6c3021ad1.slice/crio-85e4e82134773e6aa02b650246ebb4bc4ce49df89103f821239e42d034d65ec6 WatchSource:0}: Error finding container 85e4e82134773e6aa02b650246ebb4bc4ce49df89103f821239e42d034d65ec6: Status 404 returned error can't find the container with id 85e4e82134773e6aa02b650246ebb4bc4ce49df89103f821239e42d034d65ec6 Dec 13 00:20:05 crc kubenswrapper[3562]: I1213 00:20:05.029079 3562 generic.go:334] "Generic (PLEG): container finished" podID="7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" containerID="7e67cb1755558b90cdf72aec337749ae441ec49e2d3603bb49bf4993398b30e9" exitCode=0 Dec 13 00:20:05 crc kubenswrapper[3562]: I1213 00:20:05.029192 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" event={"ID":"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3","Type":"ContainerDied","Data":"7e67cb1755558b90cdf72aec337749ae441ec49e2d3603bb49bf4993398b30e9"} Dec 13 00:20:05 crc kubenswrapper[3562]: I1213 00:20:05.034112 3562 generic.go:334] "Generic (PLEG): container finished" podID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" containerID="b867880eff1e57bbc195c400a4b45c9fcb10d6b436468e9e6eca3d50ed610080" exitCode=0 Dec 13 00:20:05 crc kubenswrapper[3562]: I1213 00:20:05.034149 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r72v7" event={"ID":"0ebc0c2b-b940-4992-830c-39d6c3021ad1","Type":"ContainerDied","Data":"b867880eff1e57bbc195c400a4b45c9fcb10d6b436468e9e6eca3d50ed610080"} Dec 13 00:20:05 crc kubenswrapper[3562]: I1213 00:20:05.034315 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r72v7" event={"ID":"0ebc0c2b-b940-4992-830c-39d6c3021ad1","Type":"ContainerStarted","Data":"85e4e82134773e6aa02b650246ebb4bc4ce49df89103f821239e42d034d65ec6"} Dec 13 00:20:06 crc kubenswrapper[3562]: I1213 00:20:06.039811 3562 generic.go:334] "Generic (PLEG): container finished" podID="c41a3fdd-e80b-4bc6-b513-912f3503e60f" containerID="424e6457f26b027cdac55bf371f96145083f56fc4ca303d2a8e1d97ccb0e01f0" exitCode=0 Dec 13 00:20:06 crc kubenswrapper[3562]: I1213 00:20:06.039843 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" event={"ID":"c41a3fdd-e80b-4bc6-b513-912f3503e60f","Type":"ContainerDied","Data":"424e6457f26b027cdac55bf371f96145083f56fc4ca303d2a8e1d97ccb0e01f0"} Dec 13 00:20:06 crc kubenswrapper[3562]: I1213 00:20:06.044109 3562 generic.go:334] "Generic (PLEG): container finished" podID="02a21e18-3551-4536-a03a-23f7d60b531d" containerID="77b40adfc638f844f6d55e521dca0d5290bf5b61788285903dc7f60016ea991e" exitCode=0 Dec 13 00:20:06 crc kubenswrapper[3562]: I1213 00:20:06.044187 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj" event={"ID":"02a21e18-3551-4536-a03a-23f7d60b531d","Type":"ContainerDied","Data":"77b40adfc638f844f6d55e521dca0d5290bf5b61788285903dc7f60016ea991e"} Dec 13 00:20:06 crc 
kubenswrapper[3562]: I1213 00:20:06.046243 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r72v7" event={"ID":"0ebc0c2b-b940-4992-830c-39d6c3021ad1","Type":"ContainerStarted","Data":"5f789167d275dcd1204ff8a197f1bb6105fef37330d7d9345568804614518a77"} Dec 13 00:20:07 crc kubenswrapper[3562]: I1213 00:20:07.060983 3562 generic.go:334] "Generic (PLEG): container finished" podID="c41a3fdd-e80b-4bc6-b513-912f3503e60f" containerID="502deb069eab045b02efdf66b72925320b317a2c171b0ec13259ce56aa0d87d5" exitCode=0 Dec 13 00:20:07 crc kubenswrapper[3562]: I1213 00:20:07.061052 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" event={"ID":"c41a3fdd-e80b-4bc6-b513-912f3503e60f","Type":"ContainerDied","Data":"502deb069eab045b02efdf66b72925320b317a2c171b0ec13259ce56aa0d87d5"} Dec 13 00:20:07 crc kubenswrapper[3562]: I1213 00:20:07.062882 3562 generic.go:334] "Generic (PLEG): container finished" podID="02a21e18-3551-4536-a03a-23f7d60b531d" containerID="e792839b3eca8311e2dd4e9235d4060ad5620e5b0ec87b99cfe0f7361e2af24f" exitCode=0 Dec 13 00:20:07 crc kubenswrapper[3562]: I1213 00:20:07.063515 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj" event={"ID":"02a21e18-3551-4536-a03a-23f7d60b531d","Type":"ContainerDied","Data":"e792839b3eca8311e2dd4e9235d4060ad5620e5b0ec87b99cfe0f7361e2af24f"} Dec 13 00:20:21 crc kubenswrapper[3562]: I1213 00:20:21.930912 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj" Dec 13 00:20:21 crc kubenswrapper[3562]: I1213 00:20:21.936734 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.021494 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-bundle\") pod \"02a21e18-3551-4536-a03a-23f7d60b531d\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.021553 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-util\") pod \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.021602 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jbsh\" (UniqueName: \"kubernetes.io/projected/02a21e18-3551-4536-a03a-23f7d60b531d-kube-api-access-9jbsh\") pod \"02a21e18-3551-4536-a03a-23f7d60b531d\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.021666 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-util\") pod \"02a21e18-3551-4536-a03a-23f7d60b531d\" (UID: \"02a21e18-3551-4536-a03a-23f7d60b531d\") " Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.021704 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsq9p\" (UniqueName: \"kubernetes.io/projected/c41a3fdd-e80b-4bc6-b513-912f3503e60f-kube-api-access-rsq9p\") pod \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.021827 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-bundle\") pod \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\" (UID: \"c41a3fdd-e80b-4bc6-b513-912f3503e60f\") " Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.023950 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-bundle" (OuterVolumeSpecName: "bundle") pod "02a21e18-3551-4536-a03a-23f7d60b531d" (UID: "02a21e18-3551-4536-a03a-23f7d60b531d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.036615 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-bundle" (OuterVolumeSpecName: "bundle") pod "c41a3fdd-e80b-4bc6-b513-912f3503e60f" (UID: "c41a3fdd-e80b-4bc6-b513-912f3503e60f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.036841 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-util" (OuterVolumeSpecName: "util") pod "c41a3fdd-e80b-4bc6-b513-912f3503e60f" (UID: "c41a3fdd-e80b-4bc6-b513-912f3503e60f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.051758 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-util" (OuterVolumeSpecName: "util") pod "02a21e18-3551-4536-a03a-23f7d60b531d" (UID: "02a21e18-3551-4536-a03a-23f7d60b531d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.122982 3562 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.123019 3562 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.123029 3562 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c41a3fdd-e80b-4bc6-b513-912f3503e60f-util\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.123040 3562 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02a21e18-3551-4536-a03a-23f7d60b531d-util\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.150742 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41a3fdd-e80b-4bc6-b513-912f3503e60f-kube-api-access-rsq9p" (OuterVolumeSpecName: "kube-api-access-rsq9p") pod "c41a3fdd-e80b-4bc6-b513-912f3503e60f" (UID: "c41a3fdd-e80b-4bc6-b513-912f3503e60f"). InnerVolumeSpecName "kube-api-access-rsq9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.151221 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a21e18-3551-4536-a03a-23f7d60b531d-kube-api-access-9jbsh" (OuterVolumeSpecName: "kube-api-access-9jbsh") pod "02a21e18-3551-4536-a03a-23f7d60b531d" (UID: "02a21e18-3551-4536-a03a-23f7d60b531d"). InnerVolumeSpecName "kube-api-access-9jbsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.226236 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9jbsh\" (UniqueName: \"kubernetes.io/projected/02a21e18-3551-4536-a03a-23f7d60b531d-kube-api-access-9jbsh\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.226503 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rsq9p\" (UniqueName: \"kubernetes.io/projected/c41a3fdd-e80b-4bc6-b513-912f3503e60f-kube-api-access-rsq9p\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.325273 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" event={"ID":"c41a3fdd-e80b-4bc6-b513-912f3503e60f","Type":"ContainerDied","Data":"5ce1d7d5f93bce9593f29550a8b73462e4c346ee15a942da310b979a9972f62f"} Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.325304 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ce1d7d5f93bce9593f29550a8b73462e4c346ee15a942da310b979a9972f62f" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.325360 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.331526 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj" event={"ID":"02a21e18-3551-4536-a03a-23f7d60b531d","Type":"ContainerDied","Data":"496d243c4e7fa3aa6cc1fbd6c1c68fa7abf4fe5821d465a46147b384df86af33"} Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.331556 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="496d243c4e7fa3aa6cc1fbd6c1c68fa7abf4fe5821d465a46147b384df86af33" Dec 13 00:20:22 crc kubenswrapper[3562]: I1213 00:20:22.331608 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj" Dec 13 00:20:25 crc kubenswrapper[3562]: I1213 00:20:25.345868 3562 generic.go:334] "Generic (PLEG): container finished" podID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" containerID="5f789167d275dcd1204ff8a197f1bb6105fef37330d7d9345568804614518a77" exitCode=0 Dec 13 00:20:25 crc kubenswrapper[3562]: I1213 00:20:25.345982 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r72v7" event={"ID":"0ebc0c2b-b940-4992-830c-39d6c3021ad1","Type":"ContainerDied","Data":"5f789167d275dcd1204ff8a197f1bb6105fef37330d7d9345568804614518a77"} Dec 13 00:20:25 crc kubenswrapper[3562]: I1213 00:20:25.348135 3562 generic.go:334] "Generic (PLEG): container finished" podID="7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" containerID="6be5528ee3aeadf008be261badc8350a55914beb39f2e65fd15512d8b1cbef3c" exitCode=0 Dec 13 00:20:25 crc kubenswrapper[3562]: I1213 00:20:25.348181 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" event={"ID":"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3","Type":"ContainerDied","Data":"6be5528ee3aeadf008be261badc8350a55914beb39f2e65fd15512d8b1cbef3c"} Dec 13 00:20:26 crc kubenswrapper[3562]: I1213 00:20:26.355038 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" event={"ID":"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3","Type":"ContainerStarted","Data":"e6b0eececd0cdbbbcf5e2aad7d86a747a8fd8bc8257b964c25fa87f1650eeea8"} Dec 13 00:20:27 crc kubenswrapper[3562]: I1213 00:20:27.362852 3562 generic.go:334] "Generic (PLEG): container finished" podID="7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" containerID="e6b0eececd0cdbbbcf5e2aad7d86a747a8fd8bc8257b964c25fa87f1650eeea8" exitCode=0 Dec 13 00:20:27 crc kubenswrapper[3562]: I1213 00:20:27.362967 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" event={"ID":"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3","Type":"ContainerDied","Data":"e6b0eececd0cdbbbcf5e2aad7d86a747a8fd8bc8257b964c25fa87f1650eeea8"} Dec 13 00:20:27 crc kubenswrapper[3562]: I1213 00:20:27.365090 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r72v7" event={"ID":"0ebc0c2b-b940-4992-830c-39d6c3021ad1","Type":"ContainerStarted","Data":"c83796786760a47100d9eb9fc799a678100a954e34259c6bdaa2503cdc648772"} Dec 13 00:20:27 crc kubenswrapper[3562]: I1213 00:20:27.415317 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r72v7" podStartSLOduration=3.797155463 podStartE2EDuration="24.415268851s" podCreationTimestamp="2025-12-13 00:20:03 +0000 UTC" firstStartedPulling="2025-12-13 00:20:05.035525728 +0000 UTC m=+841.518756251" lastFinishedPulling="2025-12-13 00:20:25.653639116 +0000 UTC m=+862.136869639" observedRunningTime="2025-12-13 00:20:27.414717366 +0000 UTC m=+863.897947889" watchObservedRunningTime="2025-12-13 00:20:27.415268851 +0000 UTC m=+863.898499364" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.111529 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.227363 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-64749bd6b4-x67b2"] Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.227764 3562 topology_manager.go:215] "Topology Admit Handler" podUID="ad3d55c4-7a74-41c0-b5ac-c2688031aaf9" podNamespace="service-telemetry" podName="elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:29 crc kubenswrapper[3562]: E1213 00:20:29.227936 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c41a3fdd-e80b-4bc6-b513-912f3503e60f" containerName="util" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.227947 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41a3fdd-e80b-4bc6-b513-912f3503e60f" containerName="util" Dec 13 00:20:29 crc kubenswrapper[3562]: E1213 00:20:29.227986 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="02a21e18-3551-4536-a03a-23f7d60b531d" containerName="pull" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.227996 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a21e18-3551-4536-a03a-23f7d60b531d" containerName="pull" Dec 13 00:20:29 crc kubenswrapper[3562]: E1213 00:20:29.228005 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" containerName="pull" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.228012 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" containerName="pull" Dec 13 00:20:29 crc kubenswrapper[3562]: E1213 00:20:29.228024 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="02a21e18-3551-4536-a03a-23f7d60b531d" containerName="util" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.228032 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a21e18-3551-4536-a03a-23f7d60b531d" containerName="util" Dec 13 00:20:29 crc kubenswrapper[3562]: E1213 00:20:29.228042 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c41a3fdd-e80b-4bc6-b513-912f3503e60f" containerName="extract" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.228052 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41a3fdd-e80b-4bc6-b513-912f3503e60f" containerName="extract" Dec 13 00:20:29 crc kubenswrapper[3562]: E1213 00:20:29.228066 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c41a3fdd-e80b-4bc6-b513-912f3503e60f" containerName="pull" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.228074 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41a3fdd-e80b-4bc6-b513-912f3503e60f" containerName="pull" Dec 13 00:20:29 crc kubenswrapper[3562]: E1213 00:20:29.228086 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" containerName="util" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.228093 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" containerName="util" Dec 13 00:20:29 crc kubenswrapper[3562]: E1213 00:20:29.228105 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="02a21e18-3551-4536-a03a-23f7d60b531d" containerName="extract" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.228112 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a21e18-3551-4536-a03a-23f7d60b531d" containerName="extract" Dec 13 
00:20:29 crc kubenswrapper[3562]: E1213 00:20:29.228120 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" containerName="extract" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.228127 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" containerName="extract" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.228236 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41a3fdd-e80b-4bc6-b513-912f3503e60f" containerName="extract" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.228255 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" containerName="extract" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.228265 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a21e18-3551-4536-a03a-23f7d60b531d" containerName="extract" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.228667 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.229617 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-bundle\") pod \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.229820 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25jp5\" (UniqueName: \"kubernetes.io/projected/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-kube-api-access-25jp5\") pod \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.229895 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-util\") pod \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\" (UID: \"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3\") " Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.230047 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5x2\" (UniqueName: \"kubernetes.io/projected/ad3d55c4-7a74-41c0-b5ac-c2688031aaf9-kube-api-access-zl5x2\") pod \"elastic-operator-64749bd6b4-x67b2\" (UID: \"ad3d55c4-7a74-41c0-b5ac-c2688031aaf9\") " pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.230110 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad3d55c4-7a74-41c0-b5ac-c2688031aaf9-webhook-cert\") pod \"elastic-operator-64749bd6b4-x67b2\" (UID: \"ad3d55c4-7a74-41c0-b5ac-c2688031aaf9\") " pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.230152 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad3d55c4-7a74-41c0-b5ac-c2688031aaf9-apiservice-cert\") pod \"elastic-operator-64749bd6b4-x67b2\" (UID: \"ad3d55c4-7a74-41c0-b5ac-c2688031aaf9\") " pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.230772 3562 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-bundle" (OuterVolumeSpecName: "bundle") pod "7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" (UID: "7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.233222 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.233260 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-ptg98" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.242587 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.242608 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.245526 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-util" (OuterVolumeSpecName: "util") pod "7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" (UID: "7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.248075 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-kube-api-access-25jp5" (OuterVolumeSpecName: "kube-api-access-25jp5") pod "7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3" (UID: "7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3"). InnerVolumeSpecName "kube-api-access-25jp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.255221 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-64749bd6b4-x67b2"] Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.331233 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad3d55c4-7a74-41c0-b5ac-c2688031aaf9-apiservice-cert\") pod \"elastic-operator-64749bd6b4-x67b2\" (UID: \"ad3d55c4-7a74-41c0-b5ac-c2688031aaf9\") " pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.331309 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5x2\" (UniqueName: \"kubernetes.io/projected/ad3d55c4-7a74-41c0-b5ac-c2688031aaf9-kube-api-access-zl5x2\") pod \"elastic-operator-64749bd6b4-x67b2\" (UID: \"ad3d55c4-7a74-41c0-b5ac-c2688031aaf9\") " pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.331353 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad3d55c4-7a74-41c0-b5ac-c2688031aaf9-webhook-cert\") pod \"elastic-operator-64749bd6b4-x67b2\" (UID: \"ad3d55c4-7a74-41c0-b5ac-c2688031aaf9\") " pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.331394 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-25jp5\" (UniqueName: \"kubernetes.io/projected/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-kube-api-access-25jp5\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.331406 3562 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-util\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.331416 3562 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.336514 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad3d55c4-7a74-41c0-b5ac-c2688031aaf9-apiservice-cert\") pod \"elastic-operator-64749bd6b4-x67b2\" (UID: \"ad3d55c4-7a74-41c0-b5ac-c2688031aaf9\") " pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.336572 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad3d55c4-7a74-41c0-b5ac-c2688031aaf9-webhook-cert\") pod \"elastic-operator-64749bd6b4-x67b2\" (UID: \"ad3d55c4-7a74-41c0-b5ac-c2688031aaf9\") " pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.354562 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5x2\" (UniqueName: \"kubernetes.io/projected/ad3d55c4-7a74-41c0-b5ac-c2688031aaf9-kube-api-access-zl5x2\") pod \"elastic-operator-64749bd6b4-x67b2\" (UID: \"ad3d55c4-7a74-41c0-b5ac-c2688031aaf9\") " pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.375440 3562 kubelet.go:2461] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" event={"ID":"7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3","Type":"ContainerDied","Data":"cbe3d91802237045c15c654b8aae04ca96dafb420eb5275743c3bdb8cc86631a"} Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.375483 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbe3d91802237045c15c654b8aae04ca96dafb420eb5275743c3bdb8cc86631a" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.375546 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx" Dec 13 00:20:29 crc kubenswrapper[3562]: I1213 00:20:29.569884 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" Dec 13 00:20:30 crc kubenswrapper[3562]: I1213 00:20:30.014661 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-64749bd6b4-x67b2"] Dec 13 00:20:30 crc kubenswrapper[3562]: W1213 00:20:30.023825 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad3d55c4_7a74_41c0_b5ac_c2688031aaf9.slice/crio-ab8597620782144ed9b9958632e8fc22da5576af080e8b4439b993cc134bb0ad WatchSource:0}: Error finding container ab8597620782144ed9b9958632e8fc22da5576af080e8b4439b993cc134bb0ad: Status 404 returned error can't find the container with id ab8597620782144ed9b9958632e8fc22da5576af080e8b4439b993cc134bb0ad Dec 13 00:20:30 crc kubenswrapper[3562]: I1213 00:20:30.380269 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" event={"ID":"ad3d55c4-7a74-41c0-b5ac-c2688031aaf9","Type":"ContainerStarted","Data":"ab8597620782144ed9b9958632e8fc22da5576af080e8b4439b993cc134bb0ad"} Dec 13 00:20:34 crc kubenswrapper[3562]: I1213 00:20:34.140731 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:34 crc kubenswrapper[3562]: I1213 00:20:34.141370 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:34 crc kubenswrapper[3562]: I1213 00:20:34.354558 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:34 crc kubenswrapper[3562]: I1213 00:20:34.514714 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:34 crc kubenswrapper[3562]: I1213 00:20:34.677441 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r72v7"] Dec 13 00:20:36 crc kubenswrapper[3562]: I1213 00:20:36.728187 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r72v7" podUID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" containerName="registry-server" containerID="cri-o://c83796786760a47100d9eb9fc799a678100a954e34259c6bdaa2503cdc648772" gracePeriod=2 Dec 13 00:20:37 crc kubenswrapper[3562]: I1213 00:20:37.747726 3562 generic.go:334] "Generic (PLEG): container finished" podID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" containerID="c83796786760a47100d9eb9fc799a678100a954e34259c6bdaa2503cdc648772" exitCode=0 Dec 13 00:20:37 crc kubenswrapper[3562]: I1213 00:20:37.747859 
3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r72v7" event={"ID":"0ebc0c2b-b940-4992-830c-39d6c3021ad1","Type":"ContainerDied","Data":"c83796786760a47100d9eb9fc799a678100a954e34259c6bdaa2503cdc648772"} Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.017366 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.080722 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-catalog-content\") pod \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.080813 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-utilities\") pod \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.080878 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlzn2\" (UniqueName: \"kubernetes.io/projected/0ebc0c2b-b940-4992-830c-39d6c3021ad1-kube-api-access-hlzn2\") pod \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\" (UID: \"0ebc0c2b-b940-4992-830c-39d6c3021ad1\") " Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.081970 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-utilities" (OuterVolumeSpecName: "utilities") pod "0ebc0c2b-b940-4992-830c-39d6c3021ad1" (UID: "0ebc0c2b-b940-4992-830c-39d6c3021ad1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.088370 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ebc0c2b-b940-4992-830c-39d6c3021ad1-kube-api-access-hlzn2" (OuterVolumeSpecName: "kube-api-access-hlzn2") pod "0ebc0c2b-b940-4992-830c-39d6c3021ad1" (UID: "0ebc0c2b-b940-4992-830c-39d6c3021ad1"). InnerVolumeSpecName "kube-api-access-hlzn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.185324 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.185372 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hlzn2\" (UniqueName: \"kubernetes.io/projected/0ebc0c2b-b940-4992-830c-39d6c3021ad1-kube-api-access-hlzn2\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.790344 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" event={"ID":"ad3d55c4-7a74-41c0-b5ac-c2688031aaf9","Type":"ContainerStarted","Data":"4907bd9d709ed1958d358a46431bd8117e01c1c262cb4c51967a97fbf2160e28"} Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.800537 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r72v7" event={"ID":"0ebc0c2b-b940-4992-830c-39d6c3021ad1","Type":"ContainerDied","Data":"85e4e82134773e6aa02b650246ebb4bc4ce49df89103f821239e42d034d65ec6"} Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.800582 3562 scope.go:117] "RemoveContainer" containerID="c83796786760a47100d9eb9fc799a678100a954e34259c6bdaa2503cdc648772" Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.800687 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r72v7" Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.851484 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/elastic-operator-64749bd6b4-x67b2" podStartSLOduration=1.840642857 podStartE2EDuration="11.851015932s" podCreationTimestamp="2025-12-13 00:20:29 +0000 UTC" firstStartedPulling="2025-12-13 00:20:30.032767367 +0000 UTC m=+866.515997890" lastFinishedPulling="2025-12-13 00:20:40.043140442 +0000 UTC m=+876.526370965" observedRunningTime="2025-12-13 00:20:40.829350495 +0000 UTC m=+877.312581018" watchObservedRunningTime="2025-12-13 00:20:40.851015932 +0000 UTC m=+877.334246455" Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.866138 3562 scope.go:117] "RemoveContainer" containerID="5f789167d275dcd1204ff8a197f1bb6105fef37330d7d9345568804614518a77" Dec 13 00:20:40 crc kubenswrapper[3562]: I1213 00:20:40.979458 3562 scope.go:117] "RemoveContainer" containerID="b867880eff1e57bbc195c400a4b45c9fcb10d6b436468e9e6eca3d50ed610080" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.115802 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ebc0c2b-b940-4992-830c-39d6c3021ad1" (UID: "0ebc0c2b-b940-4992-830c-39d6c3021ad1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.158267 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ebc0c2b-b940-4992-830c-39d6c3021ad1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.407331 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-864b67f9b9-qzllq"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.407712 3562 topology_manager.go:215] "Topology Admit Handler" podUID="c5fda24c-de7c-4868-a327-defef18d0941" podNamespace="openshift-operators" podName="obo-prometheus-operator-864b67f9b9-qzllq" Dec 13 00:20:41 crc kubenswrapper[3562]: E1213 00:20:41.408053 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" containerName="registry-server" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.408159 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" containerName="registry-server" Dec 13 00:20:41 crc kubenswrapper[3562]: E1213 00:20:41.408246 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" containerName="extract-utilities" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.408331 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" containerName="extract-utilities" Dec 13 00:20:41 crc kubenswrapper[3562]: E1213 00:20:41.408479 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" containerName="extract-content" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.408566 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" containerName="extract-content" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.408772 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" containerName="registry-server" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.409384 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-qzllq" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.411642 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-q4q5f" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.411894 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.412020 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.427816 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-864b67f9b9-qzllq"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.455876 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r72v7"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.461312 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r72v7"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.510036 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.510419 3562 topology_manager.go:215] "Topology Admit Handler" podUID="e6c8b1e6-ab4b-40e7-b71b-2d36d472d126" podNamespace="openshift-operators" podName="obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.511268 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" Dec 13 00:20:41 crc kubenswrapper[3562]: W1213 00:20:41.519193 3562 reflector.go:539] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-r65mh": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-dockercfg-r65mh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Dec 13 00:20:41 crc kubenswrapper[3562]: E1213 00:20:41.519237 3562 reflector.go:147] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-r65mh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-dockercfg-r65mh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Dec 13 00:20:41 crc kubenswrapper[3562]: W1213 00:20:41.519275 3562 reflector.go:539] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Dec 13 00:20:41 crc kubenswrapper[3562]: E1213 00:20:41.519288 3562 reflector.go:147] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-service-cert" is forbidden: User 
"system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.535208 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.535341 3562 topology_manager.go:215] "Topology Admit Handler" podUID="b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0" podNamespace="openshift-operators" podName="obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.536071 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.540434 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.561329 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.564608 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mthm6\" (UniqueName: \"kubernetes.io/projected/c5fda24c-de7c-4868-a327-defef18d0941-kube-api-access-mthm6\") pod \"obo-prometheus-operator-864b67f9b9-qzllq\" (UID: \"c5fda24c-de7c-4868-a327-defef18d0941\") " pod="openshift-operators/obo-prometheus-operator-864b67f9b9-qzllq" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.636550 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-65df589ff7-t58nh"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.636672 3562 topology_manager.go:215] "Topology Admit Handler" podUID="4d1602d6-ff5e-41fe-b6e0-e35295a45d7a" podNamespace="openshift-operators" podName="observability-operator-65df589ff7-t58nh" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.640441 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-65df589ff7-t58nh" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.654180 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.654440 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-hbxhw" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.665542 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6c8b1e6-ab4b-40e7-b71b-2d36d472d126-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9\" (UID: \"e6c8b1e6-ab4b-40e7-b71b-2d36d472d126\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.665878 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6c8b1e6-ab4b-40e7-b71b-2d36d472d126-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9\" (UID: \"e6c8b1e6-ab4b-40e7-b71b-2d36d472d126\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.665995 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8\" (UID: \"b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.666110 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-mthm6\" (UniqueName: \"kubernetes.io/projected/c5fda24c-de7c-4868-a327-defef18d0941-kube-api-access-mthm6\") pod \"obo-prometheus-operator-864b67f9b9-qzllq\" (UID: \"c5fda24c-de7c-4868-a327-defef18d0941\") " pod="openshift-operators/obo-prometheus-operator-864b67f9b9-qzllq" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.666206 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8\" (UID: \"b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.687958 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-65df589ff7-t58nh"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.699388 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-mthm6\" (UniqueName: \"kubernetes.io/projected/c5fda24c-de7c-4868-a327-defef18d0941-kube-api-access-mthm6\") pod \"obo-prometheus-operator-864b67f9b9-qzllq\" (UID: \"c5fda24c-de7c-4868-a327-defef18d0941\") " pod="openshift-operators/obo-prometheus-operator-864b67f9b9-qzllq" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.731124 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-qzllq" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.749216 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-574fd8d65d-2l28k"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.749376 3562 topology_manager.go:215] "Topology Admit Handler" podUID="f8ed63a0-9d2d-4db9-8d62-0d69963cc480" podNamespace="openshift-operators" podName="perses-operator-574fd8d65d-2l28k" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.753293 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-574fd8d65d-2l28k" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.757232 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-2fcnq" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.779124 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8\" (UID: \"b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.779193 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6c8b1e6-ab4b-40e7-b71b-2d36d472d126-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9\" (UID: \"e6c8b1e6-ab4b-40e7-b71b-2d36d472d126\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.779223 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mn2b\" (UniqueName: \"kubernetes.io/projected/4d1602d6-ff5e-41fe-b6e0-e35295a45d7a-kube-api-access-4mn2b\") pod \"observability-operator-65df589ff7-t58nh\" (UID: \"4d1602d6-ff5e-41fe-b6e0-e35295a45d7a\") " pod="openshift-operators/observability-operator-65df589ff7-t58nh" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.779247 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d1602d6-ff5e-41fe-b6e0-e35295a45d7a-observability-operator-tls\") pod \"observability-operator-65df589ff7-t58nh\" (UID: \"4d1602d6-ff5e-41fe-b6e0-e35295a45d7a\") " pod="openshift-operators/observability-operator-65df589ff7-t58nh" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.779283 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6c8b1e6-ab4b-40e7-b71b-2d36d472d126-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9\" (UID: \"e6c8b1e6-ab4b-40e7-b71b-2d36d472d126\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.779314 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8\" (UID: \"b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.781665 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-574fd8d65d-2l28k"] Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.880519 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-255qw\" (UniqueName: \"kubernetes.io/projected/f8ed63a0-9d2d-4db9-8d62-0d69963cc480-kube-api-access-255qw\") pod \"perses-operator-574fd8d65d-2l28k\" (UID: \"f8ed63a0-9d2d-4db9-8d62-0d69963cc480\") " pod="openshift-operators/perses-operator-574fd8d65d-2l28k" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.880589 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8ed63a0-9d2d-4db9-8d62-0d69963cc480-openshift-service-ca\") pod \"perses-operator-574fd8d65d-2l28k\" (UID: \"f8ed63a0-9d2d-4db9-8d62-0d69963cc480\") " pod="openshift-operators/perses-operator-574fd8d65d-2l28k" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.880638 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4mn2b\" (UniqueName: \"kubernetes.io/projected/4d1602d6-ff5e-41fe-b6e0-e35295a45d7a-kube-api-access-4mn2b\") pod \"observability-operator-65df589ff7-t58nh\" (UID: \"4d1602d6-ff5e-41fe-b6e0-e35295a45d7a\") " pod="openshift-operators/observability-operator-65df589ff7-t58nh" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.880661 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d1602d6-ff5e-41fe-b6e0-e35295a45d7a-observability-operator-tls\") pod \"observability-operator-65df589ff7-t58nh\" (UID: \"4d1602d6-ff5e-41fe-b6e0-e35295a45d7a\") " pod="openshift-operators/observability-operator-65df589ff7-t58nh" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.886031 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d1602d6-ff5e-41fe-b6e0-e35295a45d7a-observability-operator-tls\") pod \"observability-operator-65df589ff7-t58nh\" (UID: \"4d1602d6-ff5e-41fe-b6e0-e35295a45d7a\") " pod="openshift-operators/observability-operator-65df589ff7-t58nh" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.893855 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ebc0c2b-b940-4992-830c-39d6c3021ad1" path="/var/lib/kubelet/pods/0ebc0c2b-b940-4992-830c-39d6c3021ad1/volumes" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.906152 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mn2b\" (UniqueName: \"kubernetes.io/projected/4d1602d6-ff5e-41fe-b6e0-e35295a45d7a-kube-api-access-4mn2b\") pod \"observability-operator-65df589ff7-t58nh\" (UID: \"4d1602d6-ff5e-41fe-b6e0-e35295a45d7a\") " pod="openshift-operators/observability-operator-65df589ff7-t58nh" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.954918 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-65df589ff7-t58nh" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.990526 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8ed63a0-9d2d-4db9-8d62-0d69963cc480-openshift-service-ca\") pod \"perses-operator-574fd8d65d-2l28k\" (UID: \"f8ed63a0-9d2d-4db9-8d62-0d69963cc480\") " pod="openshift-operators/perses-operator-574fd8d65d-2l28k" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.990949 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-255qw\" (UniqueName: \"kubernetes.io/projected/f8ed63a0-9d2d-4db9-8d62-0d69963cc480-kube-api-access-255qw\") pod \"perses-operator-574fd8d65d-2l28k\" (UID: \"f8ed63a0-9d2d-4db9-8d62-0d69963cc480\") " pod="openshift-operators/perses-operator-574fd8d65d-2l28k" Dec 13 00:20:41 crc kubenswrapper[3562]: I1213 00:20:41.992828 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8ed63a0-9d2d-4db9-8d62-0d69963cc480-openshift-service-ca\") pod \"perses-operator-574fd8d65d-2l28k\" (UID: \"f8ed63a0-9d2d-4db9-8d62-0d69963cc480\") " pod="openshift-operators/perses-operator-574fd8d65d-2l28k" Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.073188 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-255qw\" (UniqueName: \"kubernetes.io/projected/f8ed63a0-9d2d-4db9-8d62-0d69963cc480-kube-api-access-255qw\") pod \"perses-operator-574fd8d65d-2l28k\" (UID: \"f8ed63a0-9d2d-4db9-8d62-0d69963cc480\") " pod="openshift-operators/perses-operator-574fd8d65d-2l28k" Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.105161 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-574fd8d65d-2l28k" Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.242659 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-864b67f9b9-qzllq"] Dec 13 00:20:42 crc kubenswrapper[3562]: W1213 00:20:42.259161 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5fda24c_de7c_4868_a327_defef18d0941.slice/crio-d8ef409a0de4a7f7b7bb9572cd9df55cc6e438c2217642d7f874467a4dc5119b WatchSource:0}: Error finding container d8ef409a0de4a7f7b7bb9572cd9df55cc6e438c2217642d7f874467a4dc5119b: Status 404 returned error can't find the container with id d8ef409a0de4a7f7b7bb9572cd9df55cc6e438c2217642d7f874467a4dc5119b Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.301420 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-65df589ff7-t58nh"] Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.475143 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.486018 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6c8b1e6-ab4b-40e7-b71b-2d36d472d126-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9\" (UID: \"e6c8b1e6-ab4b-40e7-b71b-2d36d472d126\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.486779 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8\" (UID: \"b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.488819 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8\" (UID: \"b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.489689 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6c8b1e6-ab4b-40e7-b71b-2d36d472d126-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9\" (UID: \"e6c8b1e6-ab4b-40e7-b71b-2d36d472d126\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.610757 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-r65mh" Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.734662 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.764430 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.772399 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-574fd8d65d-2l28k"] Dec 13 00:20:42 crc kubenswrapper[3562]: W1213 00:20:42.785192 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8ed63a0_9d2d_4db9_8d62_0d69963cc480.slice/crio-0d9df7f1654e32bada4e0d5dce9be35f60c667cbc0c7ceff6e0a29a9e67798a4 WatchSource:0}: Error finding container 0d9df7f1654e32bada4e0d5dce9be35f60c667cbc0c7ceff6e0a29a9e67798a4: Status 404 returned error can't find the container with id 0d9df7f1654e32bada4e0d5dce9be35f60c667cbc0c7ceff6e0a29a9e67798a4 Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.830065 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-65df589ff7-t58nh" event={"ID":"4d1602d6-ff5e-41fe-b6e0-e35295a45d7a","Type":"ContainerStarted","Data":"9ccf15ebdcfb339eb76b1d1017fa319c2dc49723ca511198bfdc8b5e2cb148a3"} Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.837117 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-574fd8d65d-2l28k" event={"ID":"f8ed63a0-9d2d-4db9-8d62-0d69963cc480","Type":"ContainerStarted","Data":"0d9df7f1654e32bada4e0d5dce9be35f60c667cbc0c7ceff6e0a29a9e67798a4"} Dec 13 00:20:42 crc kubenswrapper[3562]: I1213 00:20:42.840024 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-qzllq" event={"ID":"c5fda24c-de7c-4868-a327-defef18d0941","Type":"ContainerStarted","Data":"d8ef409a0de4a7f7b7bb9572cd9df55cc6e438c2217642d7f874467a4dc5119b"} Dec 13 00:20:43 crc kubenswrapper[3562]: I1213 00:20:43.055202 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9"] Dec 13 00:20:43 crc kubenswrapper[3562]: I1213 00:20:43.108037 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8"] Dec 13 00:20:43 crc kubenswrapper[3562]: W1213 00:20:43.116647 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f675e1_dac7_4bc6_9d92_4b5ef234ebf0.slice/crio-872984a67b131f3879e3a325d32d453ec8cdfd9a034eb34834930292b524fc44 WatchSource:0}: Error finding container 872984a67b131f3879e3a325d32d453ec8cdfd9a034eb34834930292b524fc44: Status 404 returned error can't find the container with id 872984a67b131f3879e3a325d32d453ec8cdfd9a034eb34834930292b524fc44 Dec 13 00:20:43 crc kubenswrapper[3562]: I1213 00:20:43.847495 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" event={"ID":"b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0","Type":"ContainerStarted","Data":"872984a67b131f3879e3a325d32d453ec8cdfd9a034eb34834930292b524fc44"} Dec 13 00:20:43 crc kubenswrapper[3562]: I1213 00:20:43.856651 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" event={"ID":"e6c8b1e6-ab4b-40e7-b71b-2d36d472d126","Type":"ContainerStarted","Data":"86a73e208ea425f2efeb2b17a27776deaba862ddcc94bcc9f4d3936bf44721ce"} Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.332465 3562 
kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rdbcd"] Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.332589 3562 topology_manager.go:215] "Topology Admit Handler" podUID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" podNamespace="openshift-marketplace" podName="community-operators-rdbcd" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.333705 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.351883 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rdbcd"] Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.400928 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-catalog-content\") pod \"community-operators-rdbcd\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.400993 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-utilities\") pod \"community-operators-rdbcd\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.401167 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w87cw\" (UniqueName: \"kubernetes.io/projected/5e4287eb-4275-4bfe-90f1-854f3e6db87a-kube-api-access-w87cw\") pod \"community-operators-rdbcd\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.502414 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-utilities\") pod \"community-operators-rdbcd\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.502473 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w87cw\" (UniqueName: \"kubernetes.io/projected/5e4287eb-4275-4bfe-90f1-854f3e6db87a-kube-api-access-w87cw\") pod \"community-operators-rdbcd\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.502512 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-catalog-content\") pod \"community-operators-rdbcd\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.502978 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-catalog-content\") pod \"community-operators-rdbcd\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:20:46 crc 
kubenswrapper[3562]: I1213 00:20:46.503186 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-utilities\") pod \"community-operators-rdbcd\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.526553 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-w87cw\" (UniqueName: \"kubernetes.io/projected/5e4287eb-4275-4bfe-90f1-854f3e6db87a-kube-api-access-w87cw\") pod \"community-operators-rdbcd\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.663254 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.751230 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-p78g5"] Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.751337 3562 topology_manager.go:215] "Topology Admit Handler" podUID="ada2581f-35f4-409c-9414-464ddb79fac9" podNamespace="cert-manager-operator" podName="cert-manager-operator-controller-manager-5774f55cb7-p78g5" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.751895 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-p78g5" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.757334 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.757364 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.760470 3562 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-7w5md" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.765896 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-p78g5"] Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.818419 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6z2j\" (UniqueName: \"kubernetes.io/projected/ada2581f-35f4-409c-9414-464ddb79fac9-kube-api-access-p6z2j\") pod \"cert-manager-operator-controller-manager-5774f55cb7-p78g5\" (UID: \"ada2581f-35f4-409c-9414-464ddb79fac9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-p78g5" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.919629 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-p6z2j\" (UniqueName: \"kubernetes.io/projected/ada2581f-35f4-409c-9414-464ddb79fac9-kube-api-access-p6z2j\") pod \"cert-manager-operator-controller-manager-5774f55cb7-p78g5\" (UID: \"ada2581f-35f4-409c-9414-464ddb79fac9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-p78g5" Dec 13 00:20:46 crc kubenswrapper[3562]: I1213 00:20:46.964064 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6z2j\" 
(UniqueName: \"kubernetes.io/projected/ada2581f-35f4-409c-9414-464ddb79fac9-kube-api-access-p6z2j\") pod \"cert-manager-operator-controller-manager-5774f55cb7-p78g5\" (UID: \"ada2581f-35f4-409c-9414-464ddb79fac9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-p78g5" Dec 13 00:20:47 crc kubenswrapper[3562]: I1213 00:20:47.075152 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-p78g5" Dec 13 00:20:47 crc kubenswrapper[3562]: I1213 00:20:47.251954 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rdbcd"] Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.884650 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.885250 3562 topology_manager.go:215] "Topology Admit Handler" podUID="fae42e72-8265-42c3-b07a-1426ce5ccfad" podNamespace="service-telemetry" podName="elasticsearch-es-default-0" Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.886182 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.888480 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.888533 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.888868 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-dm82g" Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.888883 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.888976 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.889424 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.893529 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.893761 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.894018 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Dec 13 00:20:53 crc kubenswrapper[3562]: I1213 00:20:53.909428 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072034 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072089 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072125 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072144 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072179 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072203 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/fae42e72-8265-42c3-b07a-1426ce5ccfad-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072231 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072256 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072279 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072298 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072321 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072341 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072367 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072396 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.072420 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173359 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173417 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173455 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173495 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173541 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173581 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173615 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173642 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173677 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173706 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173732 3562 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173779 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173813 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/fae42e72-8265-42c3-b07a-1426ce5ccfad-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173849 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.173879 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.174173 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.174959 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.178492 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.174172 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: 
\"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.178622 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.190809 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.190891 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.191406 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.191465 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.192052 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.192397 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.194072 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 
00:20:54.200672 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.203286 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/fae42e72-8265-42c3-b07a-1426ce5ccfad-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.205072 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/fae42e72-8265-42c3-b07a-1426ce5ccfad-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"fae42e72-8265-42c3-b07a-1426ce5ccfad\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:54 crc kubenswrapper[3562]: I1213 00:20:54.207277 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:20:58 crc kubenswrapper[3562]: W1213 00:20:58.193482 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e4287eb_4275_4bfe_90f1_854f3e6db87a.slice/crio-d983c98de736767e8929801315aed517e8c3734da1bba401126aa2c8be3557c0 WatchSource:0}: Error finding container d983c98de736767e8929801315aed517e8c3734da1bba401126aa2c8be3557c0: Status 404 returned error can't find the container with id d983c98de736767e8929801315aed517e8c3734da1bba401126aa2c8be3557c0 Dec 13 00:20:58 crc kubenswrapper[3562]: I1213 00:20:58.962296 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdbcd" event={"ID":"5e4287eb-4275-4bfe-90f1-854f3e6db87a","Type":"ContainerStarted","Data":"d983c98de736767e8929801315aed517e8c3734da1bba401126aa2c8be3557c0"} Dec 13 00:21:03 crc kubenswrapper[3562]: I1213 00:21:03.742776 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:21:03 crc kubenswrapper[3562]: I1213 00:21:03.743441 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:21:03 crc kubenswrapper[3562]: I1213 00:21:03.743468 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:21:03 crc kubenswrapper[3562]: I1213 00:21:03.743494 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:21:03 crc kubenswrapper[3562]: I1213 00:21:03.743534 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:21:04 crc kubenswrapper[3562]: I1213 00:21:04.100556 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 13 00:21:04 crc kubenswrapper[3562]: W1213 00:21:04.140895 3562 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfae42e72_8265_42c3_b07a_1426ce5ccfad.slice/crio-9abd4e2eb76866fe2efd7ca1d29beb2bc8a37e05e22ba958a63e64376fb0b65d WatchSource:0}: Error finding container 9abd4e2eb76866fe2efd7ca1d29beb2bc8a37e05e22ba958a63e64376fb0b65d: Status 404 returned error can't find the container with id 9abd4e2eb76866fe2efd7ca1d29beb2bc8a37e05e22ba958a63e64376fb0b65d Dec 13 00:21:04 crc kubenswrapper[3562]: I1213 00:21:04.237035 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-p78g5"] Dec 13 00:21:04 crc kubenswrapper[3562]: W1213 00:21:04.240565 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podada2581f_35f4_409c_9414_464ddb79fac9.slice/crio-51f618dc7a238c3daaba52400356cdc3d98839c68532d1547daf22cd5e6f15f2 WatchSource:0}: Error finding container 51f618dc7a238c3daaba52400356cdc3d98839c68532d1547daf22cd5e6f15f2: Status 404 returned error can't find the container with id 51f618dc7a238c3daaba52400356cdc3d98839c68532d1547daf22cd5e6f15f2 Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.000082 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-p78g5" event={"ID":"ada2581f-35f4-409c-9414-464ddb79fac9","Type":"ContainerStarted","Data":"51f618dc7a238c3daaba52400356cdc3d98839c68532d1547daf22cd5e6f15f2"} Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.001196 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"fae42e72-8265-42c3-b07a-1426ce5ccfad","Type":"ContainerStarted","Data":"9abd4e2eb76866fe2efd7ca1d29beb2bc8a37e05e22ba958a63e64376fb0b65d"} Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.002747 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" event={"ID":"b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0","Type":"ContainerStarted","Data":"26ad3d4fd034d710f4221e76b20ec16aaf3007a16e907333573f294ba1611eda"} Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.004068 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-574fd8d65d-2l28k" event={"ID":"f8ed63a0-9d2d-4db9-8d62-0d69963cc480","Type":"ContainerStarted","Data":"7de0688ae34c31237bd074ad057e085d568ffe1ea1f1f1c49fdb5b6ae69e5c79"} Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.004663 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-574fd8d65d-2l28k" Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.006554 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-qzllq" event={"ID":"c5fda24c-de7c-4868-a327-defef18d0941","Type":"ContainerStarted","Data":"f3657b995ccf60de9e8ba98ddd8b84d121c5234634cd5e7386cc476cdb85e50a"} Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.009029 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-65df589ff7-t58nh" event={"ID":"4d1602d6-ff5e-41fe-b6e0-e35295a45d7a","Type":"ContainerStarted","Data":"4d1a7609e6f42fbc6ab7ce26deca05e8e4d75b7edd527607c548a1f021561d02"} Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.010720 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operators/observability-operator-65df589ff7-t58nh" Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.011871 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" event={"ID":"e6c8b1e6-ab4b-40e7-b71b-2d36d472d126","Type":"ContainerStarted","Data":"62c416fe63ff621913a7a88a9d7ee9366b1e1516b00eedddad09357d9e605f5e"} Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.012891 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-65df589ff7-t58nh" Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.014188 3562 generic.go:334] "Generic (PLEG): container finished" podID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" containerID="96b205db0c9236ac758f9604395c2cada6dc7600021a6471a1288ddd5a565eb8" exitCode=0 Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.014324 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdbcd" event={"ID":"5e4287eb-4275-4bfe-90f1-854f3e6db87a","Type":"ContainerDied","Data":"96b205db0c9236ac758f9604395c2cada6dc7600021a6471a1288ddd5a565eb8"} Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.027533 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8" podStartSLOduration=3.419312823 podStartE2EDuration="24.027457184s" podCreationTimestamp="2025-12-13 00:20:41 +0000 UTC" firstStartedPulling="2025-12-13 00:20:43.118988023 +0000 UTC m=+879.602218546" lastFinishedPulling="2025-12-13 00:21:03.727132384 +0000 UTC m=+900.210362907" observedRunningTime="2025-12-13 00:21:05.022006976 +0000 UTC m=+901.505237499" watchObservedRunningTime="2025-12-13 00:21:05.027457184 +0000 UTC m=+901.510687717" Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.060139 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/observability-operator-65df589ff7-t58nh" podStartSLOduration=2.659620699 podStartE2EDuration="24.060080598s" podCreationTimestamp="2025-12-13 00:20:41 +0000 UTC" firstStartedPulling="2025-12-13 00:20:42.332259946 +0000 UTC m=+878.815490469" lastFinishedPulling="2025-12-13 00:21:03.732719845 +0000 UTC m=+900.215950368" observedRunningTime="2025-12-13 00:21:05.053604533 +0000 UTC m=+901.536835076" watchObservedRunningTime="2025-12-13 00:21:05.060080598 +0000 UTC m=+901.543311121" Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.126149 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/perses-operator-574fd8d65d-2l28k" podStartSLOduration=3.088842455 podStartE2EDuration="24.126085698s" podCreationTimestamp="2025-12-13 00:20:41 +0000 UTC" firstStartedPulling="2025-12-13 00:20:42.790022456 +0000 UTC m=+879.273252979" lastFinishedPulling="2025-12-13 00:21:03.827265699 +0000 UTC m=+900.310496222" observedRunningTime="2025-12-13 00:21:05.09926614 +0000 UTC m=+901.582496663" watchObservedRunningTime="2025-12-13 00:21:05.126085698 +0000 UTC m=+901.609316231" Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.131143 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-qzllq" podStartSLOduration=2.648957978 podStartE2EDuration="24.131039111s" podCreationTimestamp="2025-12-13 00:20:41 +0000 UTC" firstStartedPulling="2025-12-13 00:20:42.265758873 +0000 UTC m=+878.748989396" 
lastFinishedPulling="2025-12-13 00:21:03.747840006 +0000 UTC m=+900.231070529" observedRunningTime="2025-12-13 00:21:05.123478907 +0000 UTC m=+901.606709430" watchObservedRunningTime="2025-12-13 00:21:05.131039111 +0000 UTC m=+901.614269634" Dec 13 00:21:05 crc kubenswrapper[3562]: I1213 00:21:05.166374 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9" podStartSLOduration=3.523335292 podStartE2EDuration="24.166320518s" podCreationTimestamp="2025-12-13 00:20:41 +0000 UTC" firstStartedPulling="2025-12-13 00:20:43.07940701 +0000 UTC m=+879.562637533" lastFinishedPulling="2025-12-13 00:21:03.722392236 +0000 UTC m=+900.205622759" observedRunningTime="2025-12-13 00:21:05.162833154 +0000 UTC m=+901.646063697" watchObservedRunningTime="2025-12-13 00:21:05.166320518 +0000 UTC m=+901.649551041" Dec 13 00:21:06 crc kubenswrapper[3562]: I1213 00:21:06.039800 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdbcd" event={"ID":"5e4287eb-4275-4bfe-90f1-854f3e6db87a","Type":"ContainerStarted","Data":"c2afd394ad02bf856bf238e02a110766b2231d0e27168332cde9c8f5114aa80d"} Dec 13 00:21:08 crc kubenswrapper[3562]: I1213 00:21:08.883497 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Dec 13 00:21:08 crc kubenswrapper[3562]: I1213 00:21:08.883855 3562 topology_manager.go:215] "Topology Admit Handler" podUID="5d60db70-4c8e-488d-a3a2-63fe41e4aaf7" podNamespace="service-telemetry" podName="infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:08 crc kubenswrapper[3562]: I1213 00:21:08.884603 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:08 crc kubenswrapper[3562]: I1213 00:21:08.887838 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-catalog-configmap-partition-1" Dec 13 00:21:08 crc kubenswrapper[3562]: I1213 00:21:08.900458 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.005998 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/5d60db70-4c8e-488d-a3a2-63fe41e4aaf7-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.006071 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/5d60db70-4c8e-488d-a3a2-63fe41e4aaf7-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.006423 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc7nz\" (UniqueName: \"kubernetes.io/projected/5d60db70-4c8e-488d-a3a2-63fe41e4aaf7-kube-api-access-qc7nz\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.063148 3562 generic.go:334] "Generic (PLEG): container finished" podID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" containerID="c2afd394ad02bf856bf238e02a110766b2231d0e27168332cde9c8f5114aa80d" exitCode=0 Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.063207 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdbcd" event={"ID":"5e4287eb-4275-4bfe-90f1-854f3e6db87a","Type":"ContainerDied","Data":"c2afd394ad02bf856bf238e02a110766b2231d0e27168332cde9c8f5114aa80d"} Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.183370 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/5d60db70-4c8e-488d-a3a2-63fe41e4aaf7-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.183443 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: 
\"kubernetes.io/configmap/5d60db70-4c8e-488d-a3a2-63fe41e4aaf7-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.183503 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qc7nz\" (UniqueName: \"kubernetes.io/projected/5d60db70-4c8e-488d-a3a2-63fe41e4aaf7-kube-api-access-qc7nz\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.183785 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/5d60db70-4c8e-488d-a3a2-63fe41e4aaf7-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.184510 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/5d60db70-4c8e-488d-a3a2-63fe41e4aaf7-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.283039 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc7nz\" (UniqueName: \"kubernetes.io/projected/5d60db70-4c8e-488d-a3a2-63fe41e4aaf7-kube-api-access-qc7nz\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:09 crc kubenswrapper[3562]: I1213 00:21:09.512264 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Dec 13 00:21:12 crc kubenswrapper[3562]: I1213 00:21:12.108376 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-574fd8d65d-2l28k" Dec 13 00:21:39 crc kubenswrapper[3562]: I1213 00:21:39.315273 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Dec 13 00:21:39 crc kubenswrapper[3562]: I1213 00:21:39.602877 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7","Type":"ContainerStarted","Data":"390a83c97db813b9f84187129fb292dcd9e8b3a4009703491f7167a84d828aaf"} Dec 13 00:21:39 crc kubenswrapper[3562]: I1213 00:21:39.605216 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdbcd" event={"ID":"5e4287eb-4275-4bfe-90f1-854f3e6db87a","Type":"ContainerStarted","Data":"3c48f5550216a41059b41eb48f80108dfe1b6fb5406ea061950c9ad013d094be"} Dec 13 00:21:39 crc kubenswrapper[3562]: I1213 00:21:39.606464 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-p78g5" event={"ID":"ada2581f-35f4-409c-9414-464ddb79fac9","Type":"ContainerStarted","Data":"70573939c2aaab4f7d45059674be64e6e4dd31d4387783156205acb38b75c94f"} Dec 13 00:21:39 crc kubenswrapper[3562]: I1213 00:21:39.622809 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rdbcd" podStartSLOduration=48.91782266 podStartE2EDuration="53.622753603s" podCreationTimestamp="2025-12-13 00:20:46 +0000 UTC" firstStartedPulling="2025-12-13 00:21:05.015487449 +0000 UTC m=+901.498717972" lastFinishedPulling="2025-12-13 00:21:09.720418402 +0000 UTC m=+906.203648915" observedRunningTime="2025-12-13 00:21:39.622407463 +0000 UTC m=+936.105637976" watchObservedRunningTime="2025-12-13 00:21:39.622753603 +0000 UTC m=+936.105984126" Dec 13 00:21:39 crc kubenswrapper[3562]: I1213 00:21:39.646701 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-p78g5" podStartSLOduration=19.032626003 podStartE2EDuration="53.64665419s" podCreationTimestamp="2025-12-13 00:20:46 +0000 UTC" firstStartedPulling="2025-12-13 00:21:04.245549927 +0000 UTC m=+900.728780450" lastFinishedPulling="2025-12-13 00:21:38.859578114 +0000 UTC m=+935.342808637" observedRunningTime="2025-12-13 00:21:39.642980221 +0000 UTC m=+936.126210774" watchObservedRunningTime="2025-12-13 00:21:39.64665419 +0000 UTC m=+936.129884713" Dec 13 00:21:40 crc kubenswrapper[3562]: I1213 00:21:40.612042 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"fae42e72-8265-42c3-b07a-1426ce5ccfad","Type":"ContainerStarted","Data":"261d242bea214c558b7e5f5015c4cdfcf8813494de90ddbfeeb592da6800f74c"} Dec 13 00:21:40 crc kubenswrapper[3562]: I1213 00:21:40.859331 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 13 00:21:40 crc kubenswrapper[3562]: I1213 00:21:40.899861 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 13 00:21:42 crc kubenswrapper[3562]: 
I1213 00:21:42.638614 3562 generic.go:334] "Generic (PLEG): container finished" podID="fae42e72-8265-42c3-b07a-1426ce5ccfad" containerID="261d242bea214c558b7e5f5015c4cdfcf8813494de90ddbfeeb592da6800f74c" exitCode=0 Dec 13 00:21:42 crc kubenswrapper[3562]: I1213 00:21:42.639012 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"fae42e72-8265-42c3-b07a-1426ce5ccfad","Type":"ContainerDied","Data":"261d242bea214c558b7e5f5015c4cdfcf8813494de90ddbfeeb592da6800f74c"} Dec 13 00:21:43 crc kubenswrapper[3562]: I1213 00:21:43.053086 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:21:43 crc kubenswrapper[3562]: I1213 00:21:43.053507 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:21:43 crc kubenswrapper[3562]: I1213 00:21:43.657680 3562 generic.go:334] "Generic (PLEG): container finished" podID="fae42e72-8265-42c3-b07a-1426ce5ccfad" containerID="d2793d9606b8951c6ababdc08b9a9737f3d2d62126c6211183172476ad3c5bc3" exitCode=0 Dec 13 00:21:43 crc kubenswrapper[3562]: I1213 00:21:43.657727 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"fae42e72-8265-42c3-b07a-1426ce5ccfad","Type":"ContainerDied","Data":"d2793d9606b8951c6ababdc08b9a9737f3d2d62126c6211183172476ad3c5bc3"} Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.230358 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-58ffc98b58-qprqg"] Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.230490 3562 topology_manager.go:215] "Topology Admit Handler" podUID="52bc6c73-d725-404a-b570-16dcb9610fdb" podNamespace="cert-manager" podName="cert-manager-webhook-58ffc98b58-qprqg" Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.231269 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.233349 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.233396 3562 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mpv4f" Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.234771 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.250023 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-58ffc98b58-qprqg"] Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.259742 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/52bc6c73-d725-404a-b570-16dcb9610fdb-bound-sa-token\") pod \"cert-manager-webhook-58ffc98b58-qprqg\" (UID: \"52bc6c73-d725-404a-b570-16dcb9610fdb\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.259808 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h54lb\" (UniqueName: \"kubernetes.io/projected/52bc6c73-d725-404a-b570-16dcb9610fdb-kube-api-access-h54lb\") pod \"cert-manager-webhook-58ffc98b58-qprqg\" (UID: \"52bc6c73-d725-404a-b570-16dcb9610fdb\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.364392 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-h54lb\" (UniqueName: \"kubernetes.io/projected/52bc6c73-d725-404a-b570-16dcb9610fdb-kube-api-access-h54lb\") pod \"cert-manager-webhook-58ffc98b58-qprqg\" (UID: \"52bc6c73-d725-404a-b570-16dcb9610fdb\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.364511 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/52bc6c73-d725-404a-b570-16dcb9610fdb-bound-sa-token\") pod \"cert-manager-webhook-58ffc98b58-qprqg\" (UID: \"52bc6c73-d725-404a-b570-16dcb9610fdb\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.385330 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/52bc6c73-d725-404a-b570-16dcb9610fdb-bound-sa-token\") pod \"cert-manager-webhook-58ffc98b58-qprqg\" (UID: \"52bc6c73-d725-404a-b570-16dcb9610fdb\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.407449 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-h54lb\" (UniqueName: \"kubernetes.io/projected/52bc6c73-d725-404a-b570-16dcb9610fdb-kube-api-access-h54lb\") pod \"cert-manager-webhook-58ffc98b58-qprqg\" (UID: \"52bc6c73-d725-404a-b570-16dcb9610fdb\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" Dec 13 00:21:44 crc kubenswrapper[3562]: I1213 00:21:44.546999 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" Dec 13 00:21:46 crc kubenswrapper[3562]: I1213 00:21:46.664685 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:21:46 crc kubenswrapper[3562]: I1213 00:21:46.665050 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:21:46 crc kubenswrapper[3562]: I1213 00:21:46.785028 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:21:46 crc kubenswrapper[3562]: I1213 00:21:46.902454 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:21:46 crc kubenswrapper[3562]: I1213 00:21:46.947194 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rdbcd"] Dec 13 00:21:46 crc kubenswrapper[3562]: I1213 00:21:46.956723 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq"] Dec 13 00:21:46 crc kubenswrapper[3562]: I1213 00:21:46.956842 3562 topology_manager.go:215] "Topology Admit Handler" podUID="84c9ced0-5209-4237-9076-801a80f3b40f" podNamespace="cert-manager" podName="cert-manager-cainjector-6dcc74f67d-kv7qq" Dec 13 00:21:46 crc kubenswrapper[3562]: I1213 00:21:46.957543 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq" Dec 13 00:21:46 crc kubenswrapper[3562]: I1213 00:21:46.959534 3562 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-z5nrb" Dec 13 00:21:46 crc kubenswrapper[3562]: I1213 00:21:46.961514 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq"] Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.035227 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4dms\" (UniqueName: \"kubernetes.io/projected/84c9ced0-5209-4237-9076-801a80f3b40f-kube-api-access-j4dms\") pod \"cert-manager-cainjector-6dcc74f67d-kv7qq\" (UID: \"84c9ced0-5209-4237-9076-801a80f3b40f\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq" Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.035282 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84c9ced0-5209-4237-9076-801a80f3b40f-bound-sa-token\") pod \"cert-manager-cainjector-6dcc74f67d-kv7qq\" (UID: \"84c9ced0-5209-4237-9076-801a80f3b40f\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq" Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.136172 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j4dms\" (UniqueName: \"kubernetes.io/projected/84c9ced0-5209-4237-9076-801a80f3b40f-kube-api-access-j4dms\") pod \"cert-manager-cainjector-6dcc74f67d-kv7qq\" (UID: \"84c9ced0-5209-4237-9076-801a80f3b40f\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq" Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.136227 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/84c9ced0-5209-4237-9076-801a80f3b40f-bound-sa-token\") pod \"cert-manager-cainjector-6dcc74f67d-kv7qq\" (UID: \"84c9ced0-5209-4237-9076-801a80f3b40f\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq" Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.156269 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84c9ced0-5209-4237-9076-801a80f3b40f-bound-sa-token\") pod \"cert-manager-cainjector-6dcc74f67d-kv7qq\" (UID: \"84c9ced0-5209-4237-9076-801a80f3b40f\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq" Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.156526 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4dms\" (UniqueName: \"kubernetes.io/projected/84c9ced0-5209-4237-9076-801a80f3b40f-kube-api-access-j4dms\") pod \"cert-manager-cainjector-6dcc74f67d-kv7qq\" (UID: \"84c9ced0-5209-4237-9076-801a80f3b40f\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq" Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.241266 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-58ffc98b58-qprqg"] Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.284176 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq" Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.477837 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq"] Dec 13 00:21:47 crc kubenswrapper[3562]: W1213 00:21:47.485665 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84c9ced0_5209_4237_9076_801a80f3b40f.slice/crio-73db3bf9ae2a2f526a920d6a8bfd3c6a18bfef930750dc6e8b0b2f4feae2ef6d WatchSource:0}: Error finding container 73db3bf9ae2a2f526a920d6a8bfd3c6a18bfef930750dc6e8b0b2f4feae2ef6d: Status 404 returned error can't find the container with id 73db3bf9ae2a2f526a920d6a8bfd3c6a18bfef930750dc6e8b0b2f4feae2ef6d Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.678609 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"fae42e72-8265-42c3-b07a-1426ce5ccfad","Type":"ContainerStarted","Data":"23a0a2dfe36b8b6c34232c17fc5c397fd80b563103577d32aaf3c37ee4783180"} Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.679825 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.680888 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq" event={"ID":"84c9ced0-5209-4237-9076-801a80f3b40f","Type":"ContainerStarted","Data":"73db3bf9ae2a2f526a920d6a8bfd3c6a18bfef930750dc6e8b0b2f4feae2ef6d"} Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.681723 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" event={"ID":"52bc6c73-d725-404a-b570-16dcb9610fdb","Type":"ContainerStarted","Data":"39dfc53936e082d1ab60edb309936e1cb7803ad2599734c87482f39d4dd73844"} Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.682981 3562 generic.go:334] "Generic (PLEG): container finished" podID="5d60db70-4c8e-488d-a3a2-63fe41e4aaf7" 
containerID="7d9de4d2b3278826a4821c7529aa609f07403dd982d48ea07299642d8ad7b80d" exitCode=0 Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.683936 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7","Type":"ContainerDied","Data":"7d9de4d2b3278826a4821c7529aa609f07403dd982d48ea07299642d8ad7b80d"} Dec 13 00:21:47 crc kubenswrapper[3562]: I1213 00:21:47.712277 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=19.430985931 podStartE2EDuration="54.712230825s" podCreationTimestamp="2025-12-13 00:20:53 +0000 UTC" firstStartedPulling="2025-12-13 00:21:04.1442112 +0000 UTC m=+900.627441723" lastFinishedPulling="2025-12-13 00:21:39.425456094 +0000 UTC m=+935.908686617" observedRunningTime="2025-12-13 00:21:47.708256466 +0000 UTC m=+944.191486999" watchObservedRunningTime="2025-12-13 00:21:47.712230825 +0000 UTC m=+944.195461348" Dec 13 00:21:48 crc kubenswrapper[3562]: I1213 00:21:48.687981 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rdbcd" podUID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" containerName="registry-server" containerID="cri-o://3c48f5550216a41059b41eb48f80108dfe1b6fb5406ea061950c9ad013d094be" gracePeriod=2 Dec 13 00:21:49 crc kubenswrapper[3562]: I1213 00:21:49.694874 3562 generic.go:334] "Generic (PLEG): container finished" podID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" containerID="3c48f5550216a41059b41eb48f80108dfe1b6fb5406ea061950c9ad013d094be" exitCode=0 Dec 13 00:21:49 crc kubenswrapper[3562]: I1213 00:21:49.695031 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdbcd" event={"ID":"5e4287eb-4275-4bfe-90f1-854f3e6db87a","Type":"ContainerDied","Data":"3c48f5550216a41059b41eb48f80108dfe1b6fb5406ea061950c9ad013d094be"} Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.155463 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.204761 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w87cw\" (UniqueName: \"kubernetes.io/projected/5e4287eb-4275-4bfe-90f1-854f3e6db87a-kube-api-access-w87cw\") pod \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.205037 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-catalog-content\") pod \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.205139 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-utilities\") pod \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\" (UID: \"5e4287eb-4275-4bfe-90f1-854f3e6db87a\") " Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.206802 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-utilities" (OuterVolumeSpecName: "utilities") pod "5e4287eb-4275-4bfe-90f1-854f3e6db87a" (UID: "5e4287eb-4275-4bfe-90f1-854f3e6db87a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.307773 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.318283 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4287eb-4275-4bfe-90f1-854f3e6db87a-kube-api-access-w87cw" (OuterVolumeSpecName: "kube-api-access-w87cw") pod "5e4287eb-4275-4bfe-90f1-854f3e6db87a" (UID: "5e4287eb-4275-4bfe-90f1-854f3e6db87a"). InnerVolumeSpecName "kube-api-access-w87cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.426983 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w87cw\" (UniqueName: \"kubernetes.io/projected/5e4287eb-4275-4bfe-90f1-854f3e6db87a-kube-api-access-w87cw\") on node \"crc\" DevicePath \"\"" Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.761931 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdbcd" event={"ID":"5e4287eb-4275-4bfe-90f1-854f3e6db87a","Type":"ContainerDied","Data":"d983c98de736767e8929801315aed517e8c3734da1bba401126aa2c8be3557c0"} Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.761983 3562 scope.go:117] "RemoveContainer" containerID="3c48f5550216a41059b41eb48f80108dfe1b6fb5406ea061950c9ad013d094be" Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.762114 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rdbcd" Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.865200 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e4287eb-4275-4bfe-90f1-854f3e6db87a" (UID: "5e4287eb-4275-4bfe-90f1-854f3e6db87a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:21:51 crc kubenswrapper[3562]: I1213 00:21:51.897253 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4287eb-4275-4bfe-90f1-854f3e6db87a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:21:52 crc kubenswrapper[3562]: I1213 00:21:52.080163 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rdbcd"] Dec 13 00:21:52 crc kubenswrapper[3562]: I1213 00:21:52.221194 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rdbcd"] Dec 13 00:21:53 crc kubenswrapper[3562]: I1213 00:21:53.885033 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" path="/var/lib/kubelet/pods/5e4287eb-4275-4bfe-90f1-854f3e6db87a/volumes" Dec 13 00:21:55 crc kubenswrapper[3562]: I1213 00:21:55.391716 3562 scope.go:117] "RemoveContainer" containerID="c2afd394ad02bf856bf238e02a110766b2231d0e27168332cde9c8f5114aa80d" Dec 13 00:21:55 crc kubenswrapper[3562]: I1213 00:21:55.717442 3562 scope.go:117] "RemoveContainer" containerID="96b205db0c9236ac758f9604395c2cada6dc7600021a6471a1288ddd5a565eb8" Dec 13 00:21:58 crc kubenswrapper[3562]: I1213 00:21:58.063743 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq" event={"ID":"84c9ced0-5209-4237-9076-801a80f3b40f","Type":"ContainerStarted","Data":"867991a40cc9f018943355a6211b4247f29bc6fad1b408ae8c33ef51b71616d0"} Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.070283 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"5d60db70-4c8e-488d-a3a2-63fe41e4aaf7","Type":"ContainerStarted","Data":"82d579852bd79df083b8bbb71390ce2a27e34f7f982e8c00119b5328d4a97108"} Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.071792 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" event={"ID":"52bc6c73-d725-404a-b570-16dcb9610fdb","Type":"ContainerStarted","Data":"4d58968301e65d8117b19e20728b567742e45813d772a642d9ca1fb981441877"} Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.090413 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" podStartSLOduration=32.145179555 podStartE2EDuration="51.090366616s" podCreationTimestamp="2025-12-13 00:21:08 +0000 UTC" firstStartedPulling="2025-12-13 00:21:39.349799873 +0000 UTC m=+935.833030396" lastFinishedPulling="2025-12-13 00:21:58.294986934 +0000 UTC m=+954.778217457" observedRunningTime="2025-12-13 00:21:59.087382275 +0000 UTC m=+955.570612798" watchObservedRunningTime="2025-12-13 00:21:59.090366616 +0000 UTC m=+955.573597149" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.090731 3562 pod_startup_latency_tracker.go:102] "Observed pod 
startup duration" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-kv7qq" podStartSLOduration=4.732443377 podStartE2EDuration="13.090713355s" podCreationTimestamp="2025-12-13 00:21:46 +0000 UTC" firstStartedPulling="2025-12-13 00:21:47.488071188 +0000 UTC m=+943.971301711" lastFinishedPulling="2025-12-13 00:21:55.846341166 +0000 UTC m=+952.329571689" observedRunningTime="2025-12-13 00:21:58.097678716 +0000 UTC m=+954.580909239" watchObservedRunningTime="2025-12-13 00:21:59.090713355 +0000 UTC m=+955.573943878" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.108508 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" podStartSLOduration=5.948616298 podStartE2EDuration="15.108455976s" podCreationTimestamp="2025-12-13 00:21:44 +0000 UTC" firstStartedPulling="2025-12-13 00:21:47.260358694 +0000 UTC m=+943.743589227" lastFinishedPulling="2025-12-13 00:21:56.420198382 +0000 UTC m=+952.903428905" observedRunningTime="2025-12-13 00:21:59.10235365 +0000 UTC m=+955.585584183" watchObservedRunningTime="2025-12-13 00:21:59.108455976 +0000 UTC m=+955.591686499" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.357708 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="fae42e72-8265-42c3-b07a-1426ce5ccfad" containerName="elasticsearch" probeResult="failure" output=< Dec 13 00:21:59 crc kubenswrapper[3562]: {"timestamp": "2025-12-13T00:21:59+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 13 00:21:59 crc kubenswrapper[3562]: > Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.403310 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7"] Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.403453 3562 topology_manager.go:215] "Topology Admit Handler" podUID="3171c106-837c-4741-bd16-7966c515225e" podNamespace="service-telemetry" podName="581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:21:59 crc kubenswrapper[3562]: E1213 00:21:59.403636 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" containerName="extract-utilities" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.403649 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" containerName="extract-utilities" Dec 13 00:21:59 crc kubenswrapper[3562]: E1213 00:21:59.403665 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" containerName="extract-content" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.403674 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" containerName="extract-content" Dec 13 00:21:59 crc kubenswrapper[3562]: E1213 00:21:59.403691 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" containerName="registry-server" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.403699 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" containerName="registry-server" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.403830 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4287eb-4275-4bfe-90f1-854f3e6db87a" containerName="registry-server" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.404765 3562 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.415699 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7"] Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.448956 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpjz9\" (UniqueName: \"kubernetes.io/projected/3171c106-837c-4741-bd16-7966c515225e-kube-api-access-cpjz9\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.449279 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.449324 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.547988 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.550213 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.550283 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.550356 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-cpjz9\" (UniqueName: \"kubernetes.io/projected/3171c106-837c-4741-bd16-7966c515225e-kube-api-access-cpjz9\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.550621 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.550691 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.583833 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpjz9\" (UniqueName: \"kubernetes.io/projected/3171c106-837c-4741-bd16-7966c515225e-kube-api-access-cpjz9\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:21:59 crc kubenswrapper[3562]: I1213 00:21:59.721249 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:22:00 crc kubenswrapper[3562]: I1213 00:22:00.177990 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7"] Dec 13 00:22:00 crc kubenswrapper[3562]: W1213 00:22:00.181204 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3171c106_837c_4741_bd16_7966c515225e.slice/crio-b03f0e11a6043613cfb228a8b492a954522b096afcfcccf5d927e8fdcd2da78d WatchSource:0}: Error finding container b03f0e11a6043613cfb228a8b492a954522b096afcfcccf5d927e8fdcd2da78d: Status 404 returned error can't find the container with id b03f0e11a6043613cfb228a8b492a954522b096afcfcccf5d927e8fdcd2da78d Dec 13 00:22:01 crc kubenswrapper[3562]: I1213 00:22:01.083247 3562 generic.go:334] "Generic (PLEG): container finished" podID="3171c106-837c-4741-bd16-7966c515225e" containerID="c58d46c04b51bde8fbf941aed08fd16fc5f585a1e4fc328648894e368798e0b9" exitCode=0 Dec 13 00:22:01 crc kubenswrapper[3562]: I1213 00:22:01.083330 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" event={"ID":"3171c106-837c-4741-bd16-7966c515225e","Type":"ContainerDied","Data":"c58d46c04b51bde8fbf941aed08fd16fc5f585a1e4fc328648894e368798e0b9"} Dec 13 00:22:01 crc kubenswrapper[3562]: I1213 00:22:01.083646 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" event={"ID":"3171c106-837c-4741-bd16-7966c515225e","Type":"ContainerStarted","Data":"b03f0e11a6043613cfb228a8b492a954522b096afcfcccf5d927e8fdcd2da78d"} Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.096353 3562 generic.go:334] "Generic (PLEG): container finished" podID="3171c106-837c-4741-bd16-7966c515225e" containerID="d82bc30aadca7a231382d7fa235d617da77fc466b7f50d4bb4f13cd2c529c632" exitCode=0 Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.096391 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" event={"ID":"3171c106-837c-4741-bd16-7966c515225e","Type":"ContainerDied","Data":"d82bc30aadca7a231382d7fa235d617da77fc466b7f50d4bb4f13cd2c529c632"} Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.155181 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-755d7666d5-4jjtp"] Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.155285 3562 topology_manager.go:215] "Topology Admit Handler" podUID="fc5e755b-7db7-41f9-9003-bc4641f2320d" podNamespace="cert-manager" podName="cert-manager-755d7666d5-4jjtp" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.155943 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-755d7666d5-4jjtp" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.161101 3562 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hwzrg" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.169871 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-755d7666d5-4jjtp"] Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.223378 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fc5e755b-7db7-41f9-9003-bc4641f2320d-bound-sa-token\") pod \"cert-manager-755d7666d5-4jjtp\" (UID: \"fc5e755b-7db7-41f9-9003-bc4641f2320d\") " pod="cert-manager/cert-manager-755d7666d5-4jjtp" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.223583 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwglz\" (UniqueName: \"kubernetes.io/projected/fc5e755b-7db7-41f9-9003-bc4641f2320d-kube-api-access-gwglz\") pod \"cert-manager-755d7666d5-4jjtp\" (UID: \"fc5e755b-7db7-41f9-9003-bc4641f2320d\") " pod="cert-manager/cert-manager-755d7666d5-4jjtp" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.325046 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fc5e755b-7db7-41f9-9003-bc4641f2320d-bound-sa-token\") pod \"cert-manager-755d7666d5-4jjtp\" (UID: \"fc5e755b-7db7-41f9-9003-bc4641f2320d\") " pod="cert-manager/cert-manager-755d7666d5-4jjtp" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.325198 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gwglz\" (UniqueName: \"kubernetes.io/projected/fc5e755b-7db7-41f9-9003-bc4641f2320d-kube-api-access-gwglz\") pod \"cert-manager-755d7666d5-4jjtp\" (UID: \"fc5e755b-7db7-41f9-9003-bc4641f2320d\") " pod="cert-manager/cert-manager-755d7666d5-4jjtp" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.348555 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fc5e755b-7db7-41f9-9003-bc4641f2320d-bound-sa-token\") pod \"cert-manager-755d7666d5-4jjtp\" (UID: \"fc5e755b-7db7-41f9-9003-bc4641f2320d\") " pod="cert-manager/cert-manager-755d7666d5-4jjtp" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.353696 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwglz\" (UniqueName: \"kubernetes.io/projected/fc5e755b-7db7-41f9-9003-bc4641f2320d-kube-api-access-gwglz\") pod \"cert-manager-755d7666d5-4jjtp\" (UID: \"fc5e755b-7db7-41f9-9003-bc4641f2320d\") " 
pod="cert-manager/cert-manager-755d7666d5-4jjtp" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.561747 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-755d7666d5-4jjtp" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.745817 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.745920 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.745954 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.745990 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.746016 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:22:03 crc kubenswrapper[3562]: I1213 00:22:03.912013 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-755d7666d5-4jjtp"] Dec 13 00:22:03 crc kubenswrapper[3562]: W1213 00:22:03.918953 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc5e755b_7db7_41f9_9003_bc4641f2320d.slice/crio-a9a10db7ebd066e83ca48b796d6d82e3b0d3fbfc0ef2465de1aaf4097a7f4aab WatchSource:0}: Error finding container a9a10db7ebd066e83ca48b796d6d82e3b0d3fbfc0ef2465de1aaf4097a7f4aab: Status 404 returned error can't find the container with id a9a10db7ebd066e83ca48b796d6d82e3b0d3fbfc0ef2465de1aaf4097a7f4aab Dec 13 00:22:04 crc kubenswrapper[3562]: I1213 00:22:04.113508 3562 generic.go:334] "Generic (PLEG): container finished" podID="3171c106-837c-4741-bd16-7966c515225e" containerID="54521af0fa8ddfae44efeb2cd53154a8e5df747b1c51b5ffa9b8b3a7ac20ad66" exitCode=0 Dec 13 00:22:04 crc kubenswrapper[3562]: I1213 00:22:04.114927 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" event={"ID":"3171c106-837c-4741-bd16-7966c515225e","Type":"ContainerDied","Data":"54521af0fa8ddfae44efeb2cd53154a8e5df747b1c51b5ffa9b8b3a7ac20ad66"} Dec 13 00:22:04 crc kubenswrapper[3562]: I1213 00:22:04.124112 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-755d7666d5-4jjtp" event={"ID":"fc5e755b-7db7-41f9-9003-bc4641f2320d","Type":"ContainerStarted","Data":"a9a10db7ebd066e83ca48b796d6d82e3b0d3fbfc0ef2465de1aaf4097a7f4aab"} Dec 13 00:22:04 crc kubenswrapper[3562]: I1213 00:22:04.391635 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="fae42e72-8265-42c3-b07a-1426ce5ccfad" containerName="elasticsearch" probeResult="failure" output=< Dec 13 00:22:04 crc kubenswrapper[3562]: {"timestamp": "2025-12-13T00:22:04+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 13 00:22:04 crc kubenswrapper[3562]: > Dec 13 00:22:04 crc kubenswrapper[3562]: I1213 00:22:04.549370 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-58ffc98b58-qprqg" Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.129696 3562 kubelet.go:2461] "SyncLoop (PLEG): 
event for pod" pod="cert-manager/cert-manager-755d7666d5-4jjtp" event={"ID":"fc5e755b-7db7-41f9-9003-bc4641f2320d","Type":"ContainerStarted","Data":"1dc66fab422d1314bb8d9392ed77177727639a03b3d009178c0dfe49d8212532"} Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.155669 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="cert-manager/cert-manager-755d7666d5-4jjtp" podStartSLOduration=2.155619944 podStartE2EDuration="2.155619944s" podCreationTimestamp="2025-12-13 00:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:22:05.152608242 +0000 UTC m=+961.635838785" watchObservedRunningTime="2025-12-13 00:22:05.155619944 +0000 UTC m=+961.638850467" Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.625699 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.681175 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpjz9\" (UniqueName: \"kubernetes.io/projected/3171c106-837c-4741-bd16-7966c515225e-kube-api-access-cpjz9\") pod \"3171c106-837c-4741-bd16-7966c515225e\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.681237 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-util\") pod \"3171c106-837c-4741-bd16-7966c515225e\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.681272 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-bundle\") pod \"3171c106-837c-4741-bd16-7966c515225e\" (UID: \"3171c106-837c-4741-bd16-7966c515225e\") " Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.681894 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-bundle" (OuterVolumeSpecName: "bundle") pod "3171c106-837c-4741-bd16-7966c515225e" (UID: "3171c106-837c-4741-bd16-7966c515225e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.686003 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3171c106-837c-4741-bd16-7966c515225e-kube-api-access-cpjz9" (OuterVolumeSpecName: "kube-api-access-cpjz9") pod "3171c106-837c-4741-bd16-7966c515225e" (UID: "3171c106-837c-4741-bd16-7966c515225e"). InnerVolumeSpecName "kube-api-access-cpjz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.697934 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-util" (OuterVolumeSpecName: "util") pod "3171c106-837c-4741-bd16-7966c515225e" (UID: "3171c106-837c-4741-bd16-7966c515225e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.783375 3562 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-util\") on node \"crc\" DevicePath \"\"" Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.783414 3562 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3171c106-837c-4741-bd16-7966c515225e-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 00:22:05 crc kubenswrapper[3562]: I1213 00:22:05.783428 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cpjz9\" (UniqueName: \"kubernetes.io/projected/3171c106-837c-4741-bd16-7966c515225e-kube-api-access-cpjz9\") on node \"crc\" DevicePath \"\"" Dec 13 00:22:06 crc kubenswrapper[3562]: I1213 00:22:06.135813 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" event={"ID":"3171c106-837c-4741-bd16-7966c515225e","Type":"ContainerDied","Data":"b03f0e11a6043613cfb228a8b492a954522b096afcfcccf5d927e8fdcd2da78d"} Dec 13 00:22:06 crc kubenswrapper[3562]: I1213 00:22:06.135864 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b03f0e11a6043613cfb228a8b492a954522b096afcfcccf5d927e8fdcd2da78d" Dec 13 00:22:06 crc kubenswrapper[3562]: I1213 00:22:06.135817 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad9766619stf7" Dec 13 00:22:09 crc kubenswrapper[3562]: I1213 00:22:09.348362 3562 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="fae42e72-8265-42c3-b07a-1426ce5ccfad" containerName="elasticsearch" probeResult="failure" output=< Dec 13 00:22:09 crc kubenswrapper[3562]: {"timestamp": "2025-12-13T00:22:09+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 13 00:22:09 crc kubenswrapper[3562]: > Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.453488 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-547794674-77bhh"] Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.453868 3562 topology_manager.go:215] "Topology Admit Handler" podUID="fd9afdce-76cc-4b1a-bbb8-367fa6224635" podNamespace="service-telemetry" podName="smart-gateway-operator-547794674-77bhh" Dec 13 00:22:12 crc kubenswrapper[3562]: E1213 00:22:12.454033 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3171c106-837c-4741-bd16-7966c515225e" containerName="pull" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.454044 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3171c106-837c-4741-bd16-7966c515225e" containerName="pull" Dec 13 00:22:12 crc kubenswrapper[3562]: E1213 00:22:12.454057 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3171c106-837c-4741-bd16-7966c515225e" containerName="util" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.454063 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="3171c106-837c-4741-bd16-7966c515225e" containerName="util" Dec 13 00:22:12 crc kubenswrapper[3562]: E1213 00:22:12.454072 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3171c106-837c-4741-bd16-7966c515225e" containerName="extract" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.454078 3562 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3171c106-837c-4741-bd16-7966c515225e" containerName="extract" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.454189 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="3171c106-837c-4741-bd16-7966c515225e" containerName="extract" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.454539 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-547794674-77bhh" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.462357 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-nxn56" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.470350 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-547794674-77bhh"] Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.600812 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fd9afdce-76cc-4b1a-bbb8-367fa6224635-runner\") pod \"smart-gateway-operator-547794674-77bhh\" (UID: \"fd9afdce-76cc-4b1a-bbb8-367fa6224635\") " pod="service-telemetry/smart-gateway-operator-547794674-77bhh" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.600876 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m8ls\" (UniqueName: \"kubernetes.io/projected/fd9afdce-76cc-4b1a-bbb8-367fa6224635-kube-api-access-9m8ls\") pod \"smart-gateway-operator-547794674-77bhh\" (UID: \"fd9afdce-76cc-4b1a-bbb8-367fa6224635\") " pod="service-telemetry/smart-gateway-operator-547794674-77bhh" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.702586 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fd9afdce-76cc-4b1a-bbb8-367fa6224635-runner\") pod \"smart-gateway-operator-547794674-77bhh\" (UID: \"fd9afdce-76cc-4b1a-bbb8-367fa6224635\") " pod="service-telemetry/smart-gateway-operator-547794674-77bhh" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.702652 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9m8ls\" (UniqueName: \"kubernetes.io/projected/fd9afdce-76cc-4b1a-bbb8-367fa6224635-kube-api-access-9m8ls\") pod \"smart-gateway-operator-547794674-77bhh\" (UID: \"fd9afdce-76cc-4b1a-bbb8-367fa6224635\") " pod="service-telemetry/smart-gateway-operator-547794674-77bhh" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.703216 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fd9afdce-76cc-4b1a-bbb8-367fa6224635-runner\") pod \"smart-gateway-operator-547794674-77bhh\" (UID: \"fd9afdce-76cc-4b1a-bbb8-367fa6224635\") " pod="service-telemetry/smart-gateway-operator-547794674-77bhh" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.732839 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m8ls\" (UniqueName: \"kubernetes.io/projected/fd9afdce-76cc-4b1a-bbb8-367fa6224635-kube-api-access-9m8ls\") pod \"smart-gateway-operator-547794674-77bhh\" (UID: \"fd9afdce-76cc-4b1a-bbb8-367fa6224635\") " pod="service-telemetry/smart-gateway-operator-547794674-77bhh" Dec 13 00:22:12 crc kubenswrapper[3562]: I1213 00:22:12.770749 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-547794674-77bhh" Dec 13 00:22:13 crc kubenswrapper[3562]: I1213 00:22:13.052978 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:22:13 crc kubenswrapper[3562]: I1213 00:22:13.053042 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:22:13 crc kubenswrapper[3562]: I1213 00:22:13.233843 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-547794674-77bhh"] Dec 13 00:22:13 crc kubenswrapper[3562]: W1213 00:22:13.239280 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd9afdce_76cc_4b1a_bbb8_367fa6224635.slice/crio-5e9be46b20dffac88653f067fb5c3c0abbae0e92d153d424e8e31d4141d3f9a0 WatchSource:0}: Error finding container 5e9be46b20dffac88653f067fb5c3c0abbae0e92d153d424e8e31d4141d3f9a0: Status 404 returned error can't find the container with id 5e9be46b20dffac88653f067fb5c3c0abbae0e92d153d424e8e31d4141d3f9a0 Dec 13 00:22:14 crc kubenswrapper[3562]: I1213 00:22:14.190969 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-547794674-77bhh" event={"ID":"fd9afdce-76cc-4b1a-bbb8-367fa6224635","Type":"ContainerStarted","Data":"5e9be46b20dffac88653f067fb5c3c0abbae0e92d153d424e8e31d4141d3f9a0"} Dec 13 00:22:14 crc kubenswrapper[3562]: I1213 00:22:14.666369 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 13 00:22:38 crc kubenswrapper[3562]: I1213 00:22:38.334338 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-547794674-77bhh" event={"ID":"fd9afdce-76cc-4b1a-bbb8-367fa6224635","Type":"ContainerStarted","Data":"63014b40846fe96af674bfb722a4519a07117eb0ade54e4066f270bdbd95019a"} Dec 13 00:22:38 crc kubenswrapper[3562]: I1213 00:22:38.355156 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-547794674-77bhh" podStartSLOduration=2.147504294 podStartE2EDuration="26.355100591s" podCreationTimestamp="2025-12-13 00:22:12 +0000 UTC" firstStartedPulling="2025-12-13 00:22:13.241079677 +0000 UTC m=+969.724310200" lastFinishedPulling="2025-12-13 00:22:37.448675974 +0000 UTC m=+993.931906497" observedRunningTime="2025-12-13 00:22:38.3502317 +0000 UTC m=+994.833462233" watchObservedRunningTime="2025-12-13 00:22:38.355100591 +0000 UTC m=+994.838331124" Dec 13 00:22:43 crc kubenswrapper[3562]: I1213 00:22:43.053289 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:22:43 crc kubenswrapper[3562]: I1213 00:22:43.054060 3562 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:22:43 crc kubenswrapper[3562]: I1213 00:22:43.054119 3562 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 13 00:22:43 crc kubenswrapper[3562]: I1213 00:22:43.055127 3562 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ad64da496031e88d35a286745296696d921692888dc942adb2368e3bb1eb23a"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 00:22:43 crc kubenswrapper[3562]: I1213 00:22:43.055315 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://4ad64da496031e88d35a286745296696d921692888dc942adb2368e3bb1eb23a" gracePeriod=600 Dec 13 00:22:43 crc kubenswrapper[3562]: I1213 00:22:43.362130 3562 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="4ad64da496031e88d35a286745296696d921692888dc942adb2368e3bb1eb23a" exitCode=0 Dec 13 00:22:43 crc kubenswrapper[3562]: I1213 00:22:43.362418 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"4ad64da496031e88d35a286745296696d921692888dc942adb2368e3bb1eb23a"} Dec 13 00:22:43 crc kubenswrapper[3562]: I1213 00:22:43.362444 3562 scope.go:117] "RemoveContainer" containerID="f31a6a4a0aa2cc3bff798df54c8e89ea969afc1109ceee8230ab21aba1f5ffe2" Dec 13 00:22:44 crc kubenswrapper[3562]: I1213 00:22:44.368988 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"9a472952244bcda71b7ed2e29a85cbb61dc9bc7aa5fa043fe4877ed29fb76b92"} Dec 13 00:23:03 crc kubenswrapper[3562]: I1213 00:23:03.746948 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:23:03 crc kubenswrapper[3562]: I1213 00:23:03.747546 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:23:03 crc kubenswrapper[3562]: I1213 00:23:03.747618 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:23:03 crc kubenswrapper[3562]: I1213 00:23:03.747664 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:23:03 crc kubenswrapper[3562]: I1213 00:23:03.747715 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.705652 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Dec 13 
00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.706471 3562 topology_manager.go:215] "Topology Admit Handler" podUID="f53b7f93-7c01-4a1b-8dc6-1574b345709b" podNamespace="service-telemetry" podName="awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.707605 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.712681 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-catalog-configmap-partition-1" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.772666 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.847943 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn5fv\" (UniqueName: \"kubernetes.io/projected/f53b7f93-7c01-4a1b-8dc6-1574b345709b-kube-api-access-qn5fv\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"f53b7f93-7c01-4a1b-8dc6-1574b345709b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.848063 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/f53b7f93-7c01-4a1b-8dc6-1574b345709b-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"f53b7f93-7c01-4a1b-8dc6-1574b345709b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.848346 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/f53b7f93-7c01-4a1b-8dc6-1574b345709b-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"f53b7f93-7c01-4a1b-8dc6-1574b345709b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.949944 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qn5fv\" (UniqueName: \"kubernetes.io/projected/f53b7f93-7c01-4a1b-8dc6-1574b345709b-kube-api-access-qn5fv\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"f53b7f93-7c01-4a1b-8dc6-1574b345709b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.950081 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/f53b7f93-7c01-4a1b-8dc6-1574b345709b-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"f53b7f93-7c01-4a1b-8dc6-1574b345709b\") " 
pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.950146 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/f53b7f93-7c01-4a1b-8dc6-1574b345709b-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"f53b7f93-7c01-4a1b-8dc6-1574b345709b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.950886 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/f53b7f93-7c01-4a1b-8dc6-1574b345709b-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"f53b7f93-7c01-4a1b-8dc6-1574b345709b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.951223 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/f53b7f93-7c01-4a1b-8dc6-1574b345709b-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"f53b7f93-7c01-4a1b-8dc6-1574b345709b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:06 crc kubenswrapper[3562]: I1213 00:23:06.979336 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn5fv\" (UniqueName: \"kubernetes.io/projected/f53b7f93-7c01-4a1b-8dc6-1574b345709b-kube-api-access-qn5fv\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"f53b7f93-7c01-4a1b-8dc6-1574b345709b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:07 crc kubenswrapper[3562]: I1213 00:23:07.024336 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Dec 13 00:23:07 crc kubenswrapper[3562]: I1213 00:23:07.251038 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Dec 13 00:23:07 crc kubenswrapper[3562]: W1213 00:23:07.262786 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf53b7f93_7c01_4a1b_8dc6_1574b345709b.slice/crio-ecbb01b49e31fa159ad0779799c35235edbde6bad6c234c1d382e58ff45d5083 WatchSource:0}: Error finding container ecbb01b49e31fa159ad0779799c35235edbde6bad6c234c1d382e58ff45d5083: Status 404 returned error can't find the container with id ecbb01b49e31fa159ad0779799c35235edbde6bad6c234c1d382e58ff45d5083 Dec 13 00:23:07 crc kubenswrapper[3562]: I1213 00:23:07.481836 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"f53b7f93-7c01-4a1b-8dc6-1574b345709b","Type":"ContainerStarted","Data":"ecbb01b49e31fa159ad0779799c35235edbde6bad6c234c1d382e58ff45d5083"} Dec 13 00:23:09 crc kubenswrapper[3562]: I1213 00:23:09.492699 3562 generic.go:334] "Generic (PLEG): container finished" podID="f53b7f93-7c01-4a1b-8dc6-1574b345709b" containerID="8b75c5e803fa2645aed2dee1fda4b50c94915f023a3ead260e82fddafccbb3c2" exitCode=0 Dec 13 00:23:09 crc kubenswrapper[3562]: I1213 00:23:09.492830 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"f53b7f93-7c01-4a1b-8dc6-1574b345709b","Type":"ContainerDied","Data":"8b75c5e803fa2645aed2dee1fda4b50c94915f023a3ead260e82fddafccbb3c2"} Dec 13 00:23:11 crc kubenswrapper[3562]: I1213 00:23:11.504490 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"f53b7f93-7c01-4a1b-8dc6-1574b345709b","Type":"ContainerStarted","Data":"50b41cd4ec83864a9fcfe076d53ff079286fc7fb86730fe1940792e8fcd75c93"} Dec 13 00:23:11 crc kubenswrapper[3562]: I1213 00:23:11.520875 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" podStartSLOduration=4.952913636 podStartE2EDuration="5.520833872s" podCreationTimestamp="2025-12-13 00:23:06 +0000 UTC" firstStartedPulling="2025-12-13 00:23:09.495120739 +0000 UTC m=+1025.978351262" lastFinishedPulling="2025-12-13 00:23:10.063040985 +0000 UTC m=+1026.546271498" observedRunningTime="2025-12-13 00:23:11.519149557 +0000 UTC m=+1028.002380100" watchObservedRunningTime="2025-12-13 00:23:11.520833872 +0000 UTC m=+1028.004064395" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.132767 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7"] Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.134369 3562 topology_manager.go:215] "Topology Admit Handler" podUID="986aa180-17f8-4424-9f97-5fe45ec05dff" podNamespace="openshift-marketplace" podName="6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.136017 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.138884 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-4w6pc" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.148137 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7"] Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.196365 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65"] Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.196545 3562 topology_manager.go:215] "Topology Admit Handler" podUID="cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" podNamespace="service-telemetry" podName="59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.200005 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.200520 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65"] Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.336933 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.337199 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.337271 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.337345 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bh4h\" (UniqueName: \"kubernetes.io/projected/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-kube-api-access-2bh4h\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.337379 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.337436 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dqw5\" (UniqueName: \"kubernetes.io/projected/986aa180-17f8-4424-9f97-5fe45ec05dff-kube-api-access-5dqw5\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.438176 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.438268 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.438297 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.438851 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.438733 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.438791 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.438860 3562 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2bh4h\" (UniqueName: \"kubernetes.io/projected/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-kube-api-access-2bh4h\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.438969 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5dqw5\" (UniqueName: \"kubernetes.io/projected/986aa180-17f8-4424-9f97-5fe45ec05dff-kube-api-access-5dqw5\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.438997 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.439277 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.460752 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bh4h\" (UniqueName: \"kubernetes.io/projected/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-kube-api-access-2bh4h\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.463737 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dqw5\" (UniqueName: \"kubernetes.io/projected/986aa180-17f8-4424-9f97-5fe45ec05dff-kube-api-access-5dqw5\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.522056 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.758283 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:12 crc kubenswrapper[3562]: I1213 00:23:12.994751 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65"] Dec 13 00:23:13 crc kubenswrapper[3562]: I1213 00:23:13.376444 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7"] Dec 13 00:23:13 crc kubenswrapper[3562]: W1213 00:23:13.388148 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod986aa180_17f8_4424_9f97_5fe45ec05dff.slice/crio-8d93febed46cfda4cb984c3fb15df896520e12fa46cabba03265a4eda3989c9c WatchSource:0}: Error finding container 8d93febed46cfda4cb984c3fb15df896520e12fa46cabba03265a4eda3989c9c: Status 404 returned error can't find the container with id 8d93febed46cfda4cb984c3fb15df896520e12fa46cabba03265a4eda3989c9c Dec 13 00:23:13 crc kubenswrapper[3562]: I1213 00:23:13.514303 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" event={"ID":"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7","Type":"ContainerStarted","Data":"ab35e0df79d0a7147fa5148e93f0d5ef7be89e54f5f4d2872d02897369c22cc1"} Dec 13 00:23:13 crc kubenswrapper[3562]: I1213 00:23:13.515773 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" event={"ID":"986aa180-17f8-4424-9f97-5fe45ec05dff","Type":"ContainerStarted","Data":"8d93febed46cfda4cb984c3fb15df896520e12fa46cabba03265a4eda3989c9c"} Dec 13 00:23:14 crc kubenswrapper[3562]: I1213 00:23:14.522139 3562 generic.go:334] "Generic (PLEG): container finished" podID="986aa180-17f8-4424-9f97-5fe45ec05dff" containerID="f44eb67be04c7ef3184fc1110332144b6d21bc174cf788053b0a7d5ac7b662ed" exitCode=0 Dec 13 00:23:14 crc kubenswrapper[3562]: I1213 00:23:14.522165 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" event={"ID":"986aa180-17f8-4424-9f97-5fe45ec05dff","Type":"ContainerDied","Data":"f44eb67be04c7ef3184fc1110332144b6d21bc174cf788053b0a7d5ac7b662ed"} Dec 13 00:23:14 crc kubenswrapper[3562]: I1213 00:23:14.524134 3562 generic.go:334] "Generic (PLEG): container finished" podID="cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" containerID="6a89cbb96e6a362fc1ff6a7435e518d53c5bb52ba3621794a94d04e4db2e586b" exitCode=0 Dec 13 00:23:14 crc kubenswrapper[3562]: I1213 00:23:14.524162 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" event={"ID":"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7","Type":"ContainerDied","Data":"6a89cbb96e6a362fc1ff6a7435e518d53c5bb52ba3621794a94d04e4db2e586b"} Dec 13 00:23:17 crc kubenswrapper[3562]: I1213 00:23:17.541354 3562 generic.go:334] "Generic (PLEG): container finished" podID="cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" containerID="08dd7c46122b368713f770089813de72dc88b8fb4bf9291bbd21ecde545d9eb6" exitCode=0 Dec 13 00:23:17 crc kubenswrapper[3562]: I1213 00:23:17.541425 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" 
event={"ID":"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7","Type":"ContainerDied","Data":"08dd7c46122b368713f770089813de72dc88b8fb4bf9291bbd21ecde545d9eb6"} Dec 13 00:23:17 crc kubenswrapper[3562]: I1213 00:23:17.544009 3562 generic.go:334] "Generic (PLEG): container finished" podID="986aa180-17f8-4424-9f97-5fe45ec05dff" containerID="ae48b01a9cde8a1bcf3894283f4924e1a543a00b3fe87854dc9582d19f78a561" exitCode=0 Dec 13 00:23:17 crc kubenswrapper[3562]: I1213 00:23:17.544040 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" event={"ID":"986aa180-17f8-4424-9f97-5fe45ec05dff","Type":"ContainerDied","Data":"ae48b01a9cde8a1bcf3894283f4924e1a543a00b3fe87854dc9582d19f78a561"} Dec 13 00:23:18 crc kubenswrapper[3562]: I1213 00:23:18.551954 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" event={"ID":"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7","Type":"ContainerStarted","Data":"5f71e0a46dd7e48c9a4b503274699f4dd8f66fe3b95e71c27fa1c7775de10dd6"} Dec 13 00:23:18 crc kubenswrapper[3562]: I1213 00:23:18.554792 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" event={"ID":"986aa180-17f8-4424-9f97-5fe45ec05dff","Type":"ContainerStarted","Data":"8a9dac86038354da32adfbc34d8d91a74b57aa44cc5e91789ec4fe1b94a9d7c6"} Dec 13 00:23:18 crc kubenswrapper[3562]: I1213 00:23:18.564111 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" podStartSLOduration=4.516714983 podStartE2EDuration="6.56404906s" podCreationTimestamp="2025-12-13 00:23:12 +0000 UTC" firstStartedPulling="2025-12-13 00:23:14.525549442 +0000 UTC m=+1031.008779965" lastFinishedPulling="2025-12-13 00:23:16.572883509 +0000 UTC m=+1033.056114042" observedRunningTime="2025-12-13 00:23:18.5633523 +0000 UTC m=+1035.046582823" watchObservedRunningTime="2025-12-13 00:23:18.56404906 +0000 UTC m=+1035.047279573" Dec 13 00:23:18 crc kubenswrapper[3562]: I1213 00:23:18.591172 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" podStartSLOduration=4.5401103240000005 podStartE2EDuration="6.5911192s" podCreationTimestamp="2025-12-13 00:23:12 +0000 UTC" firstStartedPulling="2025-12-13 00:23:14.52326015 +0000 UTC m=+1031.006490673" lastFinishedPulling="2025-12-13 00:23:16.574269016 +0000 UTC m=+1033.057499549" observedRunningTime="2025-12-13 00:23:18.587857142 +0000 UTC m=+1035.071087665" watchObservedRunningTime="2025-12-13 00:23:18.5911192 +0000 UTC m=+1035.074349723" Dec 13 00:23:19 crc kubenswrapper[3562]: I1213 00:23:19.561281 3562 generic.go:334] "Generic (PLEG): container finished" podID="cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" containerID="5f71e0a46dd7e48c9a4b503274699f4dd8f66fe3b95e71c27fa1c7775de10dd6" exitCode=0 Dec 13 00:23:19 crc kubenswrapper[3562]: I1213 00:23:19.561370 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" event={"ID":"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7","Type":"ContainerDied","Data":"5f71e0a46dd7e48c9a4b503274699f4dd8f66fe3b95e71c27fa1c7775de10dd6"} Dec 13 00:23:19 crc kubenswrapper[3562]: I1213 00:23:19.563649 3562 generic.go:334] "Generic (PLEG): container 
finished" podID="986aa180-17f8-4424-9f97-5fe45ec05dff" containerID="8a9dac86038354da32adfbc34d8d91a74b57aa44cc5e91789ec4fe1b94a9d7c6" exitCode=0 Dec 13 00:23:19 crc kubenswrapper[3562]: I1213 00:23:19.563697 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" event={"ID":"986aa180-17f8-4424-9f97-5fe45ec05dff","Type":"ContainerDied","Data":"8a9dac86038354da32adfbc34d8d91a74b57aa44cc5e91789ec4fe1b94a9d7c6"} Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.884003 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.888863 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.965086 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-util\") pod \"986aa180-17f8-4424-9f97-5fe45ec05dff\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.965151 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-bundle\") pod \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.965220 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-util\") pod \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.965267 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-bundle\") pod \"986aa180-17f8-4424-9f97-5fe45ec05dff\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.965325 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dqw5\" (UniqueName: \"kubernetes.io/projected/986aa180-17f8-4424-9f97-5fe45ec05dff-kube-api-access-5dqw5\") pod \"986aa180-17f8-4424-9f97-5fe45ec05dff\" (UID: \"986aa180-17f8-4424-9f97-5fe45ec05dff\") " Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.965407 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bh4h\" (UniqueName: \"kubernetes.io/projected/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-kube-api-access-2bh4h\") pod \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\" (UID: \"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7\") " Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.965703 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-bundle" (OuterVolumeSpecName: "bundle") pod "cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" (UID: "cdf6d9d7-60b1-444a-90c8-d806aa30dcd7"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.966119 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-bundle" (OuterVolumeSpecName: "bundle") pod "986aa180-17f8-4424-9f97-5fe45ec05dff" (UID: "986aa180-17f8-4424-9f97-5fe45ec05dff"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.971626 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-kube-api-access-2bh4h" (OuterVolumeSpecName: "kube-api-access-2bh4h") pod "cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" (UID: "cdf6d9d7-60b1-444a-90c8-d806aa30dcd7"). InnerVolumeSpecName "kube-api-access-2bh4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.972198 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986aa180-17f8-4424-9f97-5fe45ec05dff-kube-api-access-5dqw5" (OuterVolumeSpecName: "kube-api-access-5dqw5") pod "986aa180-17f8-4424-9f97-5fe45ec05dff" (UID: "986aa180-17f8-4424-9f97-5fe45ec05dff"). InnerVolumeSpecName "kube-api-access-5dqw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.986990 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-util" (OuterVolumeSpecName: "util") pod "986aa180-17f8-4424-9f97-5fe45ec05dff" (UID: "986aa180-17f8-4424-9f97-5fe45ec05dff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:23:20 crc kubenswrapper[3562]: I1213 00:23:20.991068 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-util" (OuterVolumeSpecName: "util") pod "cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" (UID: "cdf6d9d7-60b1-444a-90c8-d806aa30dcd7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.067173 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2bh4h\" (UniqueName: \"kubernetes.io/projected/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-kube-api-access-2bh4h\") on node \"crc\" DevicePath \"\"" Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.067208 3562 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-util\") on node \"crc\" DevicePath \"\"" Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.067222 3562 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.067285 3562 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdf6d9d7-60b1-444a-90c8-d806aa30dcd7-util\") on node \"crc\" DevicePath \"\"" Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.067297 3562 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986aa180-17f8-4424-9f97-5fe45ec05dff-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.067307 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5dqw5\" (UniqueName: \"kubernetes.io/projected/986aa180-17f8-4424-9f97-5fe45ec05dff-kube-api-access-5dqw5\") on node \"crc\" DevicePath \"\"" Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.574170 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" event={"ID":"cdf6d9d7-60b1-444a-90c8-d806aa30dcd7","Type":"ContainerDied","Data":"ab35e0df79d0a7147fa5148e93f0d5ef7be89e54f5f4d2872d02897369c22cc1"} Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.574226 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab35e0df79d0a7147fa5148e93f0d5ef7be89e54f5f4d2872d02897369c22cc1" Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.574325 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5727cr65" Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.583606 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" event={"ID":"986aa180-17f8-4424-9f97-5fe45ec05dff","Type":"ContainerDied","Data":"8d93febed46cfda4cb984c3fb15df896520e12fa46cabba03265a4eda3989c9c"} Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.583663 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d93febed46cfda4cb984c3fb15df896520e12fa46cabba03265a4eda3989c9c" Dec 13 00:23:21 crc kubenswrapper[3562]: I1213 00:23:21.583662 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.021458 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-7b75f466d4-x9d2s"] Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.021836 3562 topology_manager.go:215] "Topology Admit Handler" podUID="3e20f133-eb9c-4de9-9b78-e8529234b04f" podNamespace="service-telemetry" podName="interconnect-operator-7b75f466d4-x9d2s" Dec 13 00:23:27 crc kubenswrapper[3562]: E1213 00:23:27.022047 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" containerName="pull" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.022058 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" containerName="pull" Dec 13 00:23:27 crc kubenswrapper[3562]: E1213 00:23:27.022071 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" containerName="util" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.022079 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" containerName="util" Dec 13 00:23:27 crc kubenswrapper[3562]: E1213 00:23:27.022092 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="986aa180-17f8-4424-9f97-5fe45ec05dff" containerName="extract" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.022100 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="986aa180-17f8-4424-9f97-5fe45ec05dff" containerName="extract" Dec 13 00:23:27 crc kubenswrapper[3562]: E1213 00:23:27.022111 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" containerName="extract" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.022117 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" containerName="extract" Dec 13 00:23:27 crc kubenswrapper[3562]: E1213 00:23:27.022126 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="986aa180-17f8-4424-9f97-5fe45ec05dff" containerName="util" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.022132 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="986aa180-17f8-4424-9f97-5fe45ec05dff" containerName="util" Dec 13 00:23:27 crc kubenswrapper[3562]: E1213 00:23:27.022208 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="986aa180-17f8-4424-9f97-5fe45ec05dff" containerName="pull" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.022214 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="986aa180-17f8-4424-9f97-5fe45ec05dff" containerName="pull" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.022312 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="986aa180-17f8-4424-9f97-5fe45ec05dff" containerName="extract" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.022323 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf6d9d7-60b1-444a-90c8-d806aa30dcd7" containerName="extract" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.022712 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-7b75f466d4-x9d2s" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.025210 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-4g22z" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.036637 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-7b75f466d4-x9d2s"] Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.171716 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glqbc\" (UniqueName: \"kubernetes.io/projected/3e20f133-eb9c-4de9-9b78-e8529234b04f-kube-api-access-glqbc\") pod \"interconnect-operator-7b75f466d4-x9d2s\" (UID: \"3e20f133-eb9c-4de9-9b78-e8529234b04f\") " pod="service-telemetry/interconnect-operator-7b75f466d4-x9d2s" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.273144 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-glqbc\" (UniqueName: \"kubernetes.io/projected/3e20f133-eb9c-4de9-9b78-e8529234b04f-kube-api-access-glqbc\") pod \"interconnect-operator-7b75f466d4-x9d2s\" (UID: \"3e20f133-eb9c-4de9-9b78-e8529234b04f\") " pod="service-telemetry/interconnect-operator-7b75f466d4-x9d2s" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.310075 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-glqbc\" (UniqueName: \"kubernetes.io/projected/3e20f133-eb9c-4de9-9b78-e8529234b04f-kube-api-access-glqbc\") pod \"interconnect-operator-7b75f466d4-x9d2s\" (UID: \"3e20f133-eb9c-4de9-9b78-e8529234b04f\") " pod="service-telemetry/interconnect-operator-7b75f466d4-x9d2s" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.338598 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-7b75f466d4-x9d2s" Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.562669 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-7b75f466d4-x9d2s"] Dec 13 00:23:27 crc kubenswrapper[3562]: W1213 00:23:27.566050 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e20f133_eb9c_4de9_9b78_e8529234b04f.slice/crio-f0d469c3f120cec7a8f9c5258dde5636600336f44fcee3d2b47812e61f67559d WatchSource:0}: Error finding container f0d469c3f120cec7a8f9c5258dde5636600336f44fcee3d2b47812e61f67559d: Status 404 returned error can't find the container with id f0d469c3f120cec7a8f9c5258dde5636600336f44fcee3d2b47812e61f67559d Dec 13 00:23:27 crc kubenswrapper[3562]: I1213 00:23:27.613195 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-7b75f466d4-x9d2s" event={"ID":"3e20f133-eb9c-4de9-9b78-e8529234b04f","Type":"ContainerStarted","Data":"f0d469c3f120cec7a8f9c5258dde5636600336f44fcee3d2b47812e61f67559d"} Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.327349 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-6cd7d78758-h9chr"] Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.327778 3562 topology_manager.go:215] "Topology Admit Handler" podUID="2a715853-7b11-4e9c-aa69-fb56733a0ab4" podNamespace="service-telemetry" podName="service-telemetry-operator-6cd7d78758-h9chr" Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.328857 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-6cd7d78758-h9chr" Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.334675 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-9bqjv" Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.342035 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6cd7d78758-h9chr"] Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.414465 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6kcl\" (UniqueName: \"kubernetes.io/projected/2a715853-7b11-4e9c-aa69-fb56733a0ab4-kube-api-access-d6kcl\") pod \"service-telemetry-operator-6cd7d78758-h9chr\" (UID: \"2a715853-7b11-4e9c-aa69-fb56733a0ab4\") " pod="service-telemetry/service-telemetry-operator-6cd7d78758-h9chr" Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.414641 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2a715853-7b11-4e9c-aa69-fb56733a0ab4-runner\") pod \"service-telemetry-operator-6cd7d78758-h9chr\" (UID: \"2a715853-7b11-4e9c-aa69-fb56733a0ab4\") " pod="service-telemetry/service-telemetry-operator-6cd7d78758-h9chr" Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.516702 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2a715853-7b11-4e9c-aa69-fb56733a0ab4-runner\") pod \"service-telemetry-operator-6cd7d78758-h9chr\" (UID: \"2a715853-7b11-4e9c-aa69-fb56733a0ab4\") " pod="service-telemetry/service-telemetry-operator-6cd7d78758-h9chr" Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.517145 3562 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d6kcl\" (UniqueName: \"kubernetes.io/projected/2a715853-7b11-4e9c-aa69-fb56733a0ab4-kube-api-access-d6kcl\") pod \"service-telemetry-operator-6cd7d78758-h9chr\" (UID: \"2a715853-7b11-4e9c-aa69-fb56733a0ab4\") " pod="service-telemetry/service-telemetry-operator-6cd7d78758-h9chr" Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.517385 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2a715853-7b11-4e9c-aa69-fb56733a0ab4-runner\") pod \"service-telemetry-operator-6cd7d78758-h9chr\" (UID: \"2a715853-7b11-4e9c-aa69-fb56733a0ab4\") " pod="service-telemetry/service-telemetry-operator-6cd7d78758-h9chr" Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.540514 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6kcl\" (UniqueName: \"kubernetes.io/projected/2a715853-7b11-4e9c-aa69-fb56733a0ab4-kube-api-access-d6kcl\") pod \"service-telemetry-operator-6cd7d78758-h9chr\" (UID: \"2a715853-7b11-4e9c-aa69-fb56733a0ab4\") " pod="service-telemetry/service-telemetry-operator-6cd7d78758-h9chr" Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.648399 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-6cd7d78758-h9chr" Dec 13 00:23:30 crc kubenswrapper[3562]: I1213 00:23:30.951418 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6cd7d78758-h9chr"] Dec 13 00:23:31 crc kubenswrapper[3562]: I1213 00:23:31.635966 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6cd7d78758-h9chr" event={"ID":"2a715853-7b11-4e9c-aa69-fb56733a0ab4","Type":"ContainerStarted","Data":"3b7273bc3b01a0a754a337339995c31f85fb910e17a51f81d1eef3ecd7256aea"} Dec 13 00:23:41 crc kubenswrapper[3562]: I1213 00:23:41.702417 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6cd7d78758-h9chr" event={"ID":"2a715853-7b11-4e9c-aa69-fb56733a0ab4","Type":"ContainerStarted","Data":"fbc46f5e90285adb497190489b251dbed7e336cdd80cf0efc48fe2c13d7dc608"} Dec 13 00:23:41 crc kubenswrapper[3562]: I1213 00:23:41.703524 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-7b75f466d4-x9d2s" event={"ID":"3e20f133-eb9c-4de9-9b78-e8529234b04f","Type":"ContainerStarted","Data":"dd797c74e55588ba70099687a003a0948c01acfd9fe6f232ecc09e7432fd4047"} Dec 13 00:23:41 crc kubenswrapper[3562]: I1213 00:23:41.722160 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-6cd7d78758-h9chr" podStartSLOduration=1.514373565 podStartE2EDuration="11.722115169s" podCreationTimestamp="2025-12-13 00:23:30 +0000 UTC" firstStartedPulling="2025-12-13 00:23:30.97020496 +0000 UTC m=+1047.453435483" lastFinishedPulling="2025-12-13 00:23:41.177946564 +0000 UTC m=+1057.661177087" observedRunningTime="2025-12-13 00:23:41.716589799 +0000 UTC m=+1058.199820332" watchObservedRunningTime="2025-12-13 00:23:41.722115169 +0000 UTC m=+1058.205345692" Dec 13 00:23:41 crc kubenswrapper[3562]: I1213 00:23:41.739547 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-7b75f466d4-x9d2s" podStartSLOduration=1.288577144 podStartE2EDuration="14.739490868s" 
podCreationTimestamp="2025-12-13 00:23:27 +0000 UTC" firstStartedPulling="2025-12-13 00:23:27.572046555 +0000 UTC m=+1044.055277078" lastFinishedPulling="2025-12-13 00:23:41.022960279 +0000 UTC m=+1057.506190802" observedRunningTime="2025-12-13 00:23:41.736295611 +0000 UTC m=+1058.219526134" watchObservedRunningTime="2025-12-13 00:23:41.739490868 +0000 UTC m=+1058.222721401" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.027083 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-qccls"] Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.027737 3562 topology_manager.go:215] "Topology Admit Handler" podUID="15cf5f07-acc3-4ddf-8ef3-4db09075d50c" podNamespace="service-telemetry" podName="default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.028526 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.030995 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-bfwb8" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.031445 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.031775 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.032416 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.032571 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.035479 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.044416 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-qccls"] Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.054767 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.227263 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.227332 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-config\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.227375 3562 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.227432 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.227471 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.227518 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-users\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.227550 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7v92\" (UniqueName: \"kubernetes.io/projected/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-kube-api-access-h7v92\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.328392 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.328460 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-config\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.328495 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " 
pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.328533 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.328570 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.328609 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-users\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.328632 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-h7v92\" (UniqueName: \"kubernetes.io/projected/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-kube-api-access-h7v92\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.331278 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-config\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.335192 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-users\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.339455 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.339934 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: 
I1213 00:24:03.340603 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.344425 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.348644 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7v92\" (UniqueName: \"kubernetes.io/projected/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-kube-api-access-h7v92\") pod \"default-interconnect-84dbc59cb8-qccls\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") " pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.362455 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.592743 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-qccls"] Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.748497 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.748581 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.748606 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.748631 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.748667 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:24:03 crc kubenswrapper[3562]: I1213 00:24:03.830416 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" event={"ID":"15cf5f07-acc3-4ddf-8ef3-4db09075d50c","Type":"ContainerStarted","Data":"403095897c7a331e568c2e569a8ddf241d6e6f47a44ab91f5aa1fe02e253b28e"} Dec 13 00:24:12 crc kubenswrapper[3562]: I1213 00:24:12.902353 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" event={"ID":"15cf5f07-acc3-4ddf-8ef3-4db09075d50c","Type":"ContainerStarted","Data":"5cbc281a13a75e7e5f9fa73d9c00c22dd5acb0ff16858a3633b40a7ed4492010"} Dec 13 00:24:12 crc kubenswrapper[3562]: I1213 00:24:12.928141 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" podStartSLOduration=1.254938166 podStartE2EDuration="9.928083668s" podCreationTimestamp="2025-12-13 
00:24:03 +0000 UTC" firstStartedPulling="2025-12-13 00:24:03.600300567 +0000 UTC m=+1080.083531090" lastFinishedPulling="2025-12-13 00:24:12.273446069 +0000 UTC m=+1088.756676592" observedRunningTime="2025-12-13 00:24:12.922188289 +0000 UTC m=+1089.405418842" watchObservedRunningTime="2025-12-13 00:24:12.928083668 +0000 UTC m=+1089.411314191" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.186783 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.188269 3562 topology_manager.go:215] "Topology Admit Handler" podUID="34d97b33-05e0-49fd-aa10-26387a5a9239" podNamespace="service-telemetry" podName="prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.189579 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.194282 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.194485 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.194655 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.194841 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.194977 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.195081 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.195235 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-65qx8" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.195338 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.205928 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.305348 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-config\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.305437 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.305472 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/34d97b33-05e0-49fd-aa10-26387a5a9239-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.305682 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.305820 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-web-config\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.305956 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f96dcf9-f566-4344-bac0-e31fdf0e5092\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f96dcf9-f566-4344-bac0-e31fdf0e5092\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.306024 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/34d97b33-05e0-49fd-aa10-26387a5a9239-tls-assets\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.306059 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn75r\" (UniqueName: \"kubernetes.io/projected/34d97b33-05e0-49fd-aa10-26387a5a9239-kube-api-access-gn75r\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.306081 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/34d97b33-05e0-49fd-aa10-26387a5a9239-config-out\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.306126 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/34d97b33-05e0-49fd-aa10-26387a5a9239-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.407994 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-web-config\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 
00:24:16.408051 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-0f96dcf9-f566-4344-bac0-e31fdf0e5092\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f96dcf9-f566-4344-bac0-e31fdf0e5092\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.408079 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/34d97b33-05e0-49fd-aa10-26387a5a9239-tls-assets\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.408103 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gn75r\" (UniqueName: \"kubernetes.io/projected/34d97b33-05e0-49fd-aa10-26387a5a9239-kube-api-access-gn75r\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.408121 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/34d97b33-05e0-49fd-aa10-26387a5a9239-config-out\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.408145 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/34d97b33-05e0-49fd-aa10-26387a5a9239-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.408184 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-config\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.408212 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.408235 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34d97b33-05e0-49fd-aa10-26387a5a9239-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.408264 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 
crc kubenswrapper[3562]: E1213 00:24:16.408782 3562 secret.go:194] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 13 00:24:16 crc kubenswrapper[3562]: E1213 00:24:16.408872 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-prometheus-proxy-tls podName:34d97b33-05e0-49fd-aa10-26387a5a9239 nodeName:}" failed. No retries permitted until 2025-12-13 00:24:16.908851073 +0000 UTC m=+1093.392081596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "34d97b33-05e0-49fd-aa10-26387a5a9239") : secret "default-prometheus-proxy-tls" not found Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.409461 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/34d97b33-05e0-49fd-aa10-26387a5a9239-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.409785 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34d97b33-05e0-49fd-aa10-26387a5a9239-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0" Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.411406 3562 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.411466 3562 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-0f96dcf9-f566-4344-bac0-e31fdf0e5092\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f96dcf9-f566-4344-bac0-e31fdf0e5092\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3eceffbda7d8458a5b6d884b070662817c63fb969cc971c946d19f50cc320a0d/globalmount\"" pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.413689 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/34d97b33-05e0-49fd-aa10-26387a5a9239-tls-assets\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.414162 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.414683 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-config\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.415321 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-web-config\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.425186 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/34d97b33-05e0-49fd-aa10-26387a5a9239-config-out\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.427710 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn75r\" (UniqueName: \"kubernetes.io/projected/34d97b33-05e0-49fd-aa10-26387a5a9239-kube-api-access-gn75r\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.451687 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-0f96dcf9-f566-4344-bac0-e31fdf0e5092\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f96dcf9-f566-4344-bac0-e31fdf0e5092\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:16 crc kubenswrapper[3562]: I1213 00:24:16.917296 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:16 crc kubenswrapper[3562]: E1213 00:24:16.917474 3562 secret.go:194] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Dec 13 00:24:16 crc kubenswrapper[3562]: E1213 00:24:16.917674 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-prometheus-proxy-tls podName:34d97b33-05e0-49fd-aa10-26387a5a9239 nodeName:}" failed. No retries permitted until 2025-12-13 00:24:17.917656014 +0000 UTC m=+1094.400886537 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "34d97b33-05e0-49fd-aa10-26387a5a9239") : secret "default-prometheus-proxy-tls" not found
Dec 13 00:24:17 crc kubenswrapper[3562]: I1213 00:24:17.933134 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:17 crc kubenswrapper[3562]: I1213 00:24:17.940142 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/34d97b33-05e0-49fd-aa10-26387a5a9239-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"34d97b33-05e0-49fd-aa10-26387a5a9239\") " pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:18 crc kubenswrapper[3562]: I1213 00:24:18.007256 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Dec 13 00:24:18 crc kubenswrapper[3562]: I1213 00:24:18.493357 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Dec 13 00:24:18 crc kubenswrapper[3562]: I1213 00:24:18.931614 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"34d97b33-05e0-49fd-aa10-26387a5a9239","Type":"ContainerStarted","Data":"f832c24a96a8e0ea8589990fa8a8919e2a6a095e5652993762ab7a6e1984e3bb"}
Dec 13 00:24:26 crc kubenswrapper[3562]: I1213 00:24:26.894504 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-789c959b9-hx8c4"]
Dec 13 00:24:26 crc kubenswrapper[3562]: I1213 00:24:26.895334 3562 topology_manager.go:215] "Topology Admit Handler" podUID="0ece4845-0358-42bb-a4b5-a2d319c728ae" podNamespace="service-telemetry" podName="default-snmp-webhook-789c959b9-hx8c4"
Dec 13 00:24:26 crc kubenswrapper[3562]: I1213 00:24:26.896216 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-789c959b9-hx8c4"
Dec 13 00:24:26 crc kubenswrapper[3562]: I1213 00:24:26.911163 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-789c959b9-hx8c4"]
Dec 13 00:24:26 crc kubenswrapper[3562]: I1213 00:24:26.982527 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"34d97b33-05e0-49fd-aa10-26387a5a9239","Type":"ContainerStarted","Data":"bbaf29fe10e418c6b808bb220f51ce81e23c841487076d55205d7c1349db1597"}
Dec 13 00:24:27 crc kubenswrapper[3562]: I1213 00:24:27.061548 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbp42\" (UniqueName: \"kubernetes.io/projected/0ece4845-0358-42bb-a4b5-a2d319c728ae-kube-api-access-bbp42\") pod \"default-snmp-webhook-789c959b9-hx8c4\" (UID: \"0ece4845-0358-42bb-a4b5-a2d319c728ae\") " pod="service-telemetry/default-snmp-webhook-789c959b9-hx8c4"
Dec 13 00:24:27 crc kubenswrapper[3562]: I1213 00:24:27.163190 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bbp42\" (UniqueName: \"kubernetes.io/projected/0ece4845-0358-42bb-a4b5-a2d319c728ae-kube-api-access-bbp42\") pod \"default-snmp-webhook-789c959b9-hx8c4\" (UID: \"0ece4845-0358-42bb-a4b5-a2d319c728ae\") " pod="service-telemetry/default-snmp-webhook-789c959b9-hx8c4"
Dec 13 00:24:27 crc kubenswrapper[3562]: I1213 00:24:27.182445 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbp42\" (UniqueName: \"kubernetes.io/projected/0ece4845-0358-42bb-a4b5-a2d319c728ae-kube-api-access-bbp42\") pod \"default-snmp-webhook-789c959b9-hx8c4\" (UID: \"0ece4845-0358-42bb-a4b5-a2d319c728ae\") " pod="service-telemetry/default-snmp-webhook-789c959b9-hx8c4"
Dec 13 00:24:27 crc kubenswrapper[3562]: I1213 00:24:27.223148 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-789c959b9-hx8c4"
Dec 13 00:24:27 crc kubenswrapper[3562]: I1213 00:24:27.625152 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-789c959b9-hx8c4"]
Dec 13 00:24:27 crc kubenswrapper[3562]: I1213 00:24:27.992540 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-789c959b9-hx8c4" event={"ID":"0ece4845-0358-42bb-a4b5-a2d319c728ae","Type":"ContainerStarted","Data":"f96200b87a19812eb9652c05f447f06755aa947e1f0e52489f8c919bdc1d40a7"}
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.746943 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.747329 3562 topology_manager.go:215] "Topology Admit Handler" podUID="81bb4d64-c9f8-4d5a-a8c1-e920df8350c9" podNamespace="service-telemetry" podName="alertmanager-default-0"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.748673 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.751046 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.754316 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.754488 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.754586 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-27q7s"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.755100 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.755201 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.773250 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.915841 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-tls-assets\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.915929 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-web-config\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.916047 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.916106 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.916227 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-config-out\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.916294 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.916329 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm4zc\" (UniqueName: \"kubernetes.io/projected/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-kube-api-access-bm4zc\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.916356 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c4c27555-dbdd-4a57-a91c-5ad02e54eb63\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4c27555-dbdd-4a57-a91c-5ad02e54eb63\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:30 crc kubenswrapper[3562]: I1213 00:24:30.916429 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-config-volume\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.018200 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-config-out\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.018280 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.018354 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm4zc\" (UniqueName: \"kubernetes.io/projected/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-kube-api-access-bm4zc\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.018395 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-c4c27555-dbdd-4a57-a91c-5ad02e54eb63\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4c27555-dbdd-4a57-a91c-5ad02e54eb63\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.018453 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-config-volume\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.018485 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-tls-assets\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: E1213 00:24:31.018491 3562 secret.go:194] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.018529 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-web-config\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: E1213 00:24:31.018558 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls podName:81bb4d64-c9f8-4d5a-a8c1-e920df8350c9 nodeName:}" failed. No retries permitted until 2025-12-13 00:24:31.5185407 +0000 UTC m=+1108.001771223 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "81bb4d64-c9f8-4d5a-a8c1-e920df8350c9") : secret "default-alertmanager-proxy-tls" not found
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.018590 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.018619 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.024821 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-config-volume\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.024923 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.025399 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-tls-assets\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.025438 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-web-config\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.031699 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.032052 3562 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.032098 3562 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-c4c27555-dbdd-4a57-a91c-5ad02e54eb63\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4c27555-dbdd-4a57-a91c-5ad02e54eb63\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bfce48173f7e58b650a1067dfb501018afe162d1d51f3be1b3eeb076d3b19ff5/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.038220 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm4zc\" (UniqueName: \"kubernetes.io/projected/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-kube-api-access-bm4zc\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.056249 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-c4c27555-dbdd-4a57-a91c-5ad02e54eb63\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4c27555-dbdd-4a57-a91c-5ad02e54eb63\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.062754 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-config-out\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: I1213 00:24:31.526588 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:31 crc kubenswrapper[3562]: E1213 00:24:31.526790 3562 secret.go:194] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 13 00:24:31 crc kubenswrapper[3562]: E1213 00:24:31.527196 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls podName:81bb4d64-c9f8-4d5a-a8c1-e920df8350c9 nodeName:}" failed. No retries permitted until 2025-12-13 00:24:32.527175861 +0000 UTC m=+1109.010406384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "81bb4d64-c9f8-4d5a-a8c1-e920df8350c9") : secret "default-alertmanager-proxy-tls" not found
Dec 13 00:24:32 crc kubenswrapper[3562]: I1213 00:24:32.541934 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:32 crc kubenswrapper[3562]: E1213 00:24:32.542090 3562 secret.go:194] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 13 00:24:32 crc kubenswrapper[3562]: E1213 00:24:32.542148 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls podName:81bb4d64-c9f8-4d5a-a8c1-e920df8350c9 nodeName:}" failed. No retries permitted until 2025-12-13 00:24:34.542130337 +0000 UTC m=+1111.025360860 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "81bb4d64-c9f8-4d5a-a8c1-e920df8350c9") : secret "default-alertmanager-proxy-tls" not found
Dec 13 00:24:33 crc kubenswrapper[3562]: I1213 00:24:33.020526 3562 generic.go:334] "Generic (PLEG): container finished" podID="34d97b33-05e0-49fd-aa10-26387a5a9239" containerID="bbaf29fe10e418c6b808bb220f51ce81e23c841487076d55205d7c1349db1597" exitCode=0
Dec 13 00:24:33 crc kubenswrapper[3562]: I1213 00:24:33.020666 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"34d97b33-05e0-49fd-aa10-26387a5a9239","Type":"ContainerDied","Data":"bbaf29fe10e418c6b808bb220f51ce81e23c841487076d55205d7c1349db1597"}
Dec 13 00:24:34 crc kubenswrapper[3562]: I1213 00:24:34.585823 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:34 crc kubenswrapper[3562]: E1213 00:24:34.586055 3562 secret.go:194] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 13 00:24:34 crc kubenswrapper[3562]: E1213 00:24:34.586343 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls podName:81bb4d64-c9f8-4d5a-a8c1-e920df8350c9 nodeName:}" failed. No retries permitted until 2025-12-13 00:24:38.586319452 +0000 UTC m=+1115.069549975 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "81bb4d64-c9f8-4d5a-a8c1-e920df8350c9") : secret "default-alertmanager-proxy-tls" not found
Dec 13 00:24:38 crc kubenswrapper[3562]: I1213 00:24:38.646749 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:38 crc kubenswrapper[3562]: I1213 00:24:38.666128 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/81bb4d64-c9f8-4d5a-a8c1-e920df8350c9-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9\") " pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:38 crc kubenswrapper[3562]: I1213 00:24:38.888793 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Dec 13 00:24:39 crc kubenswrapper[3562]: I1213 00:24:39.364769 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 13 00:24:39 crc kubenswrapper[3562]: W1213 00:24:39.367685 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81bb4d64_c9f8_4d5a_a8c1_e920df8350c9.slice/crio-9e535f50d25e0db3489edc3a3bee66a21ff34006c34ac2cc033475568c007fb2 WatchSource:0}: Error finding container 9e535f50d25e0db3489edc3a3bee66a21ff34006c34ac2cc033475568c007fb2: Status 404 returned error can't find the container with id 9e535f50d25e0db3489edc3a3bee66a21ff34006c34ac2cc033475568c007fb2
Dec 13 00:24:40 crc kubenswrapper[3562]: I1213 00:24:40.063264 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9","Type":"ContainerStarted","Data":"9e535f50d25e0db3489edc3a3bee66a21ff34006c34ac2cc033475568c007fb2"}
Dec 13 00:24:41 crc kubenswrapper[3562]: I1213 00:24:41.069964 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-789c959b9-hx8c4" event={"ID":"0ece4845-0358-42bb-a4b5-a2d319c728ae","Type":"ContainerStarted","Data":"ba40e1f6a88dc1a7b6e5b686907850c299a3a45dde816a4cb66a446a599a4302"}
Dec 13 00:24:41 crc kubenswrapper[3562]: I1213 00:24:41.087598 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-789c959b9-hx8c4" podStartSLOduration=3.92697413 podStartE2EDuration="15.087553728s" podCreationTimestamp="2025-12-13 00:24:26 +0000 UTC" firstStartedPulling="2025-12-13 00:24:27.631226607 +0000 UTC m=+1104.114457140" lastFinishedPulling="2025-12-13 00:24:38.791806215 +0000 UTC m=+1115.275036738" observedRunningTime="2025-12-13 00:24:41.08621141 +0000 UTC m=+1117.569441933" watchObservedRunningTime="2025-12-13 00:24:41.087553728 +0000 UTC m=+1117.570784251"
Dec 13 00:24:43 crc kubenswrapper[3562]: I1213 00:24:43.052614 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 00:24:43 crc kubenswrapper[3562]: I1213 00:24:43.053047 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 00:24:44 crc kubenswrapper[3562]: I1213 00:24:44.087611 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9","Type":"ContainerStarted","Data":"2b33ec4ac26569be048a42b1680517507f02a2359bec225fe9c6af174dc92738"}
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.622425 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"]
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.622544 3562 topology_manager.go:215] "Topology Admit Handler" podUID="150cc378-bfa8-4da4-95e4-2f3565086ece" podNamespace="service-telemetry" podName="default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.623667 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.625761 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.628388 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.628518 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.629658 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-mjcgp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.651584 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"]
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.775715 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/150cc378-bfa8-4da4-95e4-2f3565086ece-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.775780 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.775819 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.776276 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/150cc378-bfa8-4da4-95e4-2f3565086ece-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.776435 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqq8g\" (UniqueName: \"kubernetes.io/projected/150cc378-bfa8-4da4-95e4-2f3565086ece-kube-api-access-tqq8g\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.877637 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tqq8g\" (UniqueName: \"kubernetes.io/projected/150cc378-bfa8-4da4-95e4-2f3565086ece-kube-api-access-tqq8g\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.877707 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/150cc378-bfa8-4da4-95e4-2f3565086ece-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.877733 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.877767 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.877824 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/150cc378-bfa8-4da4-95e4-2f3565086ece-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: E1213 00:24:47.877973 3562 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 13 00:24:47 crc kubenswrapper[3562]: E1213 00:24:47.878050 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-default-cloud1-coll-meter-proxy-tls podName:150cc378-bfa8-4da4-95e4-2f3565086ece nodeName:}" failed. No retries permitted until 2025-12-13 00:24:48.378031102 +0000 UTC m=+1124.861261625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp" (UID: "150cc378-bfa8-4da4-95e4-2f3565086ece") : secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.878229 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/150cc378-bfa8-4da4-95e4-2f3565086ece-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.878824 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/150cc378-bfa8-4da4-95e4-2f3565086ece-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.894606 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:47 crc kubenswrapper[3562]: I1213 00:24:47.934671 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqq8g\" (UniqueName: \"kubernetes.io/projected/150cc378-bfa8-4da4-95e4-2f3565086ece-kube-api-access-tqq8g\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:48 crc kubenswrapper[3562]: I1213 00:24:48.385611 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:48 crc kubenswrapper[3562]: E1213 00:24:48.385732 3562 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 13 00:24:48 crc kubenswrapper[3562]: E1213 00:24:48.385784 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-default-cloud1-coll-meter-proxy-tls podName:150cc378-bfa8-4da4-95e4-2f3565086ece nodeName:}" failed. No retries permitted until 2025-12-13 00:24:49.385770007 +0000 UTC m=+1125.869000520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp" (UID: "150cc378-bfa8-4da4-95e4-2f3565086ece") : secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.121244 3562 generic.go:334] "Generic (PLEG): container finished" podID="81bb4d64-c9f8-4d5a-a8c1-e920df8350c9" containerID="2b33ec4ac26569be048a42b1680517507f02a2359bec225fe9c6af174dc92738" exitCode=0
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.121290 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9","Type":"ContainerDied","Data":"2b33ec4ac26569be048a42b1680517507f02a2359bec225fe9c6af174dc92738"}
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.133572 3562 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.400701 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.407080 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/150cc378-bfa8-4da4-95e4-2f3565086ece-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp\" (UID: \"150cc378-bfa8-4da4-95e4-2f3565086ece\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.447406 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.920953 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5"]
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.921325 3562 topology_manager.go:215] "Topology Admit Handler" podUID="287cbd43-c207-4358-969d-3ccbc72f0186" podNamespace="service-telemetry" podName="default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5"
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.922383 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5"
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.924623 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls"
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.924629 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap"
Dec 13 00:24:49 crc kubenswrapper[3562]: I1213 00:24:49.933000 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5"]
Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.011685 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5"
Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.011733 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/287cbd43-c207-4358-969d-3ccbc72f0186-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5"
Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.011774 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/287cbd43-c207-4358-969d-3ccbc72f0186-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5"
Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.011815 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-664n5\" (UniqueName: \"kubernetes.io/projected/287cbd43-c207-4358-969d-3ccbc72f0186-kube-api-access-664n5\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5"
Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.011868 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5"
Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.113353 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5"
Dec 13 00:24:50 crc
kubenswrapper[3562]: I1213 00:24:50.113660 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/287cbd43-c207-4358-969d-3ccbc72f0186-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.113688 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/287cbd43-c207-4358-969d-3ccbc72f0186-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.113714 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-664n5\" (UniqueName: \"kubernetes.io/projected/287cbd43-c207-4358-969d-3ccbc72f0186-kube-api-access-664n5\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.113758 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:50 crc kubenswrapper[3562]: E1213 00:24:50.113887 3562 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 13 00:24:50 crc kubenswrapper[3562]: E1213 00:24:50.113962 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-default-cloud1-ceil-meter-proxy-tls podName:287cbd43-c207-4358-969d-3ccbc72f0186 nodeName:}" failed. No retries permitted until 2025-12-13 00:24:50.61393943 +0000 UTC m=+1127.097169953 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" (UID: "287cbd43-c207-4358-969d-3ccbc72f0186") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.114753 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/287cbd43-c207-4358-969d-3ccbc72f0186-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.114803 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/287cbd43-c207-4358-969d-3ccbc72f0186-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.121179 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.132387 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-664n5\" (UniqueName: \"kubernetes.io/projected/287cbd43-c207-4358-969d-3ccbc72f0186-kube-api-access-664n5\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.351019 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp"] Dec 13 00:24:50 crc kubenswrapper[3562]: W1213 00:24:50.364649 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod150cc378_bfa8_4da4_95e4_2f3565086ece.slice/crio-2d385ab80654f43b0eb4a823288c00bc6cf9a033f5490339ed0df25b0cbdd1bd WatchSource:0}: Error finding container 2d385ab80654f43b0eb4a823288c00bc6cf9a033f5490339ed0df25b0cbdd1bd: Status 404 returned error can't find the container with id 2d385ab80654f43b0eb4a823288c00bc6cf9a033f5490339ed0df25b0cbdd1bd Dec 13 00:24:50 crc kubenswrapper[3562]: I1213 00:24:50.620443 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:50 crc kubenswrapper[3562]: E1213 00:24:50.620660 3562 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret 
"default-cloud1-ceil-meter-proxy-tls" not found Dec 13 00:24:50 crc kubenswrapper[3562]: E1213 00:24:50.621030 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-default-cloud1-ceil-meter-proxy-tls podName:287cbd43-c207-4358-969d-3ccbc72f0186 nodeName:}" failed. No retries permitted until 2025-12-13 00:24:51.621010577 +0000 UTC m=+1128.104241100 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" (UID: "287cbd43-c207-4358-969d-3ccbc72f0186") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 13 00:24:51 crc kubenswrapper[3562]: I1213 00:24:51.138880 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"34d97b33-05e0-49fd-aa10-26387a5a9239","Type":"ContainerStarted","Data":"9e262ec8ff8f27e2d71dade7da43847ba3a9974c370ab0be0568b4b1314246ed"} Dec 13 00:24:51 crc kubenswrapper[3562]: I1213 00:24:51.140658 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp" event={"ID":"150cc378-bfa8-4da4-95e4-2f3565086ece","Type":"ContainerStarted","Data":"2d385ab80654f43b0eb4a823288c00bc6cf9a033f5490339ed0df25b0cbdd1bd"} Dec 13 00:24:51 crc kubenswrapper[3562]: I1213 00:24:51.636661 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:51 crc kubenswrapper[3562]: I1213 00:24:51.659673 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/287cbd43-c207-4358-969d-3ccbc72f0186-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5\" (UID: \"287cbd43-c207-4358-969d-3ccbc72f0186\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:51 crc kubenswrapper[3562]: I1213 00:24:51.760928 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" Dec 13 00:24:52 crc kubenswrapper[3562]: I1213 00:24:52.528722 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5"] Dec 13 00:24:52 crc kubenswrapper[3562]: W1213 00:24:52.557946 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod287cbd43_c207_4358_969d_3ccbc72f0186.slice/crio-40457ed9b9decd62ccd59aa8c106d8ee4d88c3fddbe5c990dcec82d0ba584c7e WatchSource:0}: Error finding container 40457ed9b9decd62ccd59aa8c106d8ee4d88c3fddbe5c990dcec82d0ba584c7e: Status 404 returned error can't find the container with id 40457ed9b9decd62ccd59aa8c106d8ee4d88c3fddbe5c990dcec82d0ba584c7e Dec 13 00:24:53 crc kubenswrapper[3562]: I1213 00:24:53.167452 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9","Type":"ContainerStarted","Data":"533862b1cc3abd769b31a2ecce59a36701f24a5b9aadae61695157de97b0d3cd"} Dec 13 00:24:53 crc kubenswrapper[3562]: I1213 00:24:53.169345 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" event={"ID":"287cbd43-c207-4358-969d-3ccbc72f0186","Type":"ContainerStarted","Data":"40457ed9b9decd62ccd59aa8c106d8ee4d88c3fddbe5c990dcec82d0ba584c7e"} Dec 13 00:24:54 crc kubenswrapper[3562]: I1213 00:24:54.767862 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l"] Dec 13 00:24:54 crc kubenswrapper[3562]: I1213 00:24:54.768281 3562 topology_manager.go:215] "Topology Admit Handler" podUID="41908f98-a23e-41a8-bde3-404d2f858c1f" podNamespace="service-telemetry" podName="default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:54 crc kubenswrapper[3562]: I1213 00:24:54.769347 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:54 crc kubenswrapper[3562]: I1213 00:24:54.810721 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Dec 13 00:24:54 crc kubenswrapper[3562]: I1213 00:24:54.811025 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Dec 13 00:24:54 crc kubenswrapper[3562]: I1213 00:24:54.819950 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l"] Dec 13 00:24:54 crc kubenswrapper[3562]: I1213 00:24:54.906009 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:54 crc kubenswrapper[3562]: I1213 00:24:54.906081 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/41908f98-a23e-41a8-bde3-404d2f858c1f-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:54 crc kubenswrapper[3562]: I1213 00:24:54.906123 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:54 crc kubenswrapper[3562]: I1213 00:24:54.906215 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/41908f98-a23e-41a8-bde3-404d2f858c1f-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:54 crc kubenswrapper[3562]: I1213 00:24:54.906258 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx5p8\" (UniqueName: \"kubernetes.io/projected/41908f98-a23e-41a8-bde3-404d2f858c1f-kube-api-access-rx5p8\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:55 crc kubenswrapper[3562]: I1213 00:24:55.008043 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/41908f98-a23e-41a8-bde3-404d2f858c1f-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:55 crc 
kubenswrapper[3562]: I1213 00:24:55.008102 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rx5p8\" (UniqueName: \"kubernetes.io/projected/41908f98-a23e-41a8-bde3-404d2f858c1f-kube-api-access-rx5p8\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:55 crc kubenswrapper[3562]: I1213 00:24:55.008127 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:55 crc kubenswrapper[3562]: I1213 00:24:55.008179 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/41908f98-a23e-41a8-bde3-404d2f858c1f-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:55 crc kubenswrapper[3562]: I1213 00:24:55.008203 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:55 crc kubenswrapper[3562]: E1213 00:24:55.009177 3562 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 13 00:24:55 crc kubenswrapper[3562]: E1213 00:24:55.009273 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-default-cloud1-sens-meter-proxy-tls podName:41908f98-a23e-41a8-bde3-404d2f858c1f nodeName:}" failed. No retries permitted until 2025-12-13 00:24:55.509249881 +0000 UTC m=+1131.992480404 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" (UID: "41908f98-a23e-41a8-bde3-404d2f858c1f") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 13 00:24:55 crc kubenswrapper[3562]: I1213 00:24:55.009478 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/41908f98-a23e-41a8-bde3-404d2f858c1f-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:55 crc kubenswrapper[3562]: I1213 00:24:55.010123 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/41908f98-a23e-41a8-bde3-404d2f858c1f-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:55 crc kubenswrapper[3562]: I1213 00:24:55.015674 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:55 crc kubenswrapper[3562]: I1213 00:24:55.027288 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx5p8\" (UniqueName: \"kubernetes.io/projected/41908f98-a23e-41a8-bde3-404d2f858c1f-kube-api-access-rx5p8\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:55 crc kubenswrapper[3562]: I1213 00:24:55.197881 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"34d97b33-05e0-49fd-aa10-26387a5a9239","Type":"ContainerStarted","Data":"b1a7a1c95c9f4c86721ee01674035e409b1196950abeacd84cd0cec92991aa4b"} Dec 13 00:24:55 crc kubenswrapper[3562]: I1213 00:24:55.515668 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:55 crc kubenswrapper[3562]: E1213 00:24:55.515836 3562 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 13 00:24:55 crc kubenswrapper[3562]: E1213 00:24:55.515884 3562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-default-cloud1-sens-meter-proxy-tls podName:41908f98-a23e-41a8-bde3-404d2f858c1f nodeName:}" failed. No retries permitted until 2025-12-13 00:24:56.515869986 +0000 UTC m=+1132.999100509 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" (UID: "41908f98-a23e-41a8-bde3-404d2f858c1f") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 13 00:24:56 crc kubenswrapper[3562]: I1213 00:24:56.531886 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:56 crc kubenswrapper[3562]: I1213 00:24:56.538518 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/41908f98-a23e-41a8-bde3-404d2f858c1f-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l\" (UID: \"41908f98-a23e-41a8-bde3-404d2f858c1f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:56 crc kubenswrapper[3562]: I1213 00:24:56.621430 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" Dec 13 00:24:57 crc kubenswrapper[3562]: I1213 00:24:57.212745 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9","Type":"ContainerStarted","Data":"fdc2861c17ceb0fba6ab2212940018bea93174ec51332f82f0ad6642a846fe6a"} Dec 13 00:25:03 crc kubenswrapper[3562]: I1213 00:25:03.749967 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:25:03 crc kubenswrapper[3562]: I1213 00:25:03.750287 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:25:03 crc kubenswrapper[3562]: I1213 00:25:03.750314 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:25:03 crc kubenswrapper[3562]: I1213 00:25:03.750343 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:25:03 crc kubenswrapper[3562]: I1213 00:25:03.750386 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.288754 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk"] Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.289199 3562 topology_manager.go:215] "Topology Admit Handler" podUID="23d6b871-3611-4291-91d7-f444af791b10" podNamespace="service-telemetry" podName="default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.290193 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.298575 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk"] Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.298937 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.299163 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.375822 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/23d6b871-3611-4291-91d7-f444af791b10-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.375953 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/23d6b871-3611-4291-91d7-f444af791b10-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.376050 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/23d6b871-3611-4291-91d7-f444af791b10-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.376093 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7gvh\" (UniqueName: \"kubernetes.io/projected/23d6b871-3611-4291-91d7-f444af791b10-kube-api-access-d7gvh\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.477211 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7gvh\" (UniqueName: \"kubernetes.io/projected/23d6b871-3611-4291-91d7-f444af791b10-kube-api-access-d7gvh\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.477302 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/23d6b871-3611-4291-91d7-f444af791b10-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.477335 3562 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/23d6b871-3611-4291-91d7-f444af791b10-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.477393 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/23d6b871-3611-4291-91d7-f444af791b10-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.478004 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/23d6b871-3611-4291-91d7-f444af791b10-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.478699 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/23d6b871-3611-4291-91d7-f444af791b10-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.485896 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/23d6b871-3611-4291-91d7-f444af791b10-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.508979 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7gvh\" (UniqueName: \"kubernetes.io/projected/23d6b871-3611-4291-91d7-f444af791b10-kube-api-access-d7gvh\") pod \"default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk\" (UID: \"23d6b871-3611-4291-91d7-f444af791b10\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:04 crc kubenswrapper[3562]: I1213 00:25:04.629648 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" Dec 13 00:25:05 crc kubenswrapper[3562]: I1213 00:25:05.431319 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l"] Dec 13 00:25:05 crc kubenswrapper[3562]: W1213 00:25:05.439403 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41908f98_a23e_41a8_bde3_404d2f858c1f.slice/crio-adc7ed7e174c1cb9593b497ec30b107499e785e55e099ba0d5fa4777389a8209 WatchSource:0}: Error finding container adc7ed7e174c1cb9593b497ec30b107499e785e55e099ba0d5fa4777389a8209: Status 404 returned error can't find the container with id adc7ed7e174c1cb9593b497ec30b107499e785e55e099ba0d5fa4777389a8209 Dec 13 00:25:05 crc kubenswrapper[3562]: W1213 00:25:05.569779 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23d6b871_3611_4291_91d7_f444af791b10.slice/crio-8f181f9d8c39deb74f87cee9f3ac70382a733e3b5381579cfd617f7b4af24cf1 WatchSource:0}: Error finding container 8f181f9d8c39deb74f87cee9f3ac70382a733e3b5381579cfd617f7b4af24cf1: Status 404 returned error can't find the container with id 8f181f9d8c39deb74f87cee9f3ac70382a733e3b5381579cfd617f7b4af24cf1 Dec 13 00:25:05 crc kubenswrapper[3562]: I1213 00:25:05.571029 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk"] Dec 13 00:25:06 crc kubenswrapper[3562]: I1213 00:25:06.290669 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" event={"ID":"23d6b871-3611-4291-91d7-f444af791b10","Type":"ContainerStarted","Data":"8f181f9d8c39deb74f87cee9f3ac70382a733e3b5381579cfd617f7b4af24cf1"} Dec 13 00:25:06 crc kubenswrapper[3562]: I1213 00:25:06.292416 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp" event={"ID":"150cc378-bfa8-4da4-95e4-2f3565086ece","Type":"ContainerStarted","Data":"fe6b64558d11a900ebeb7ed5d7561ca698e3099a2bba895f53bc0f1f3c066824"} Dec 13 00:25:06 crc kubenswrapper[3562]: I1213 00:25:06.295583 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"81bb4d64-c9f8-4d5a-a8c1-e920df8350c9","Type":"ContainerStarted","Data":"d4a101757cf29e337bbeb3a92c5e1ed7e6d0ec1f0a42797dd3c2160089f0e1d1"} Dec 13 00:25:06 crc kubenswrapper[3562]: I1213 00:25:06.300862 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"34d97b33-05e0-49fd-aa10-26387a5a9239","Type":"ContainerStarted","Data":"eb15edeb0f9599b6c489b2655728dc7ec86bcd1649b2a030f961539e2a875f6c"} Dec 13 00:25:06 crc kubenswrapper[3562]: I1213 00:25:06.305817 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" event={"ID":"41908f98-a23e-41a8-bde3-404d2f858c1f","Type":"ContainerStarted","Data":"a96987e16b96c3f1725a2b2f8900ba3cfe3d5ee3f907db05fbae46724b95c74f"} Dec 13 00:25:06 crc kubenswrapper[3562]: I1213 00:25:06.305861 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" 
event={"ID":"41908f98-a23e-41a8-bde3-404d2f858c1f","Type":"ContainerStarted","Data":"adc7ed7e174c1cb9593b497ec30b107499e785e55e099ba0d5fa4777389a8209"} Dec 13 00:25:06 crc kubenswrapper[3562]: I1213 00:25:06.307080 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" event={"ID":"287cbd43-c207-4358-969d-3ccbc72f0186","Type":"ContainerStarted","Data":"42e21e5c1e2090c38bb5dffd218e3dcb69e47bb1cd12f10eae24c3a6dc7b5f1e"} Dec 13 00:25:06 crc kubenswrapper[3562]: I1213 00:25:06.322477 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=21.333999112 podStartE2EDuration="37.322435515s" podCreationTimestamp="2025-12-13 00:24:29 +0000 UTC" firstStartedPulling="2025-12-13 00:24:49.133359441 +0000 UTC m=+1125.616589964" lastFinishedPulling="2025-12-13 00:25:05.121795844 +0000 UTC m=+1141.605026367" observedRunningTime="2025-12-13 00:25:06.318979997 +0000 UTC m=+1142.802210520" watchObservedRunningTime="2025-12-13 00:25:06.322435515 +0000 UTC m=+1142.805666038" Dec 13 00:25:06 crc kubenswrapper[3562]: I1213 00:25:06.361724 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.746174937 podStartE2EDuration="51.361682549s" podCreationTimestamp="2025-12-13 00:24:15 +0000 UTC" firstStartedPulling="2025-12-13 00:24:18.498917954 +0000 UTC m=+1094.982148477" lastFinishedPulling="2025-12-13 00:25:05.114425566 +0000 UTC m=+1141.597656089" observedRunningTime="2025-12-13 00:25:06.361201915 +0000 UTC m=+1142.844432438" watchObservedRunningTime="2025-12-13 00:25:06.361682549 +0000 UTC m=+1142.844913072" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.008324 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.711976 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm"] Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.712119 3562 topology_manager.go:215] "Topology Admit Handler" podUID="94d04030-7e00-4f80-8d77-df7d10806ed8" podNamespace="service-telemetry" podName="default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.713249 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.718707 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.728470 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm"] Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.863828 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46p88\" (UniqueName: \"kubernetes.io/projected/94d04030-7e00-4f80-8d77-df7d10806ed8-kube-api-access-46p88\") pod \"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.863886 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/94d04030-7e00-4f80-8d77-df7d10806ed8-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.864059 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/94d04030-7e00-4f80-8d77-df7d10806ed8-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.864084 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/94d04030-7e00-4f80-8d77-df7d10806ed8-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.966789 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-46p88\" (UniqueName: \"kubernetes.io/projected/94d04030-7e00-4f80-8d77-df7d10806ed8-kube-api-access-46p88\") pod \"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.966841 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/94d04030-7e00-4f80-8d77-df7d10806ed8-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.966863 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/94d04030-7e00-4f80-8d77-df7d10806ed8-socket-dir\") pod 
\"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.966887 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/94d04030-7e00-4f80-8d77-df7d10806ed8-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.967727 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/94d04030-7e00-4f80-8d77-df7d10806ed8-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.968128 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/94d04030-7e00-4f80-8d77-df7d10806ed8-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.976881 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/94d04030-7e00-4f80-8d77-df7d10806ed8-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:08 crc kubenswrapper[3562]: I1213 00:25:08.988547 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-46p88\" (UniqueName: \"kubernetes.io/projected/94d04030-7e00-4f80-8d77-df7d10806ed8-kube-api-access-46p88\") pod \"default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm\" (UID: \"94d04030-7e00-4f80-8d77-df7d10806ed8\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:09 crc kubenswrapper[3562]: I1213 00:25:09.053378 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" Dec 13 00:25:09 crc kubenswrapper[3562]: I1213 00:25:09.640180 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm"] Dec 13 00:25:09 crc kubenswrapper[3562]: W1213 00:25:09.641488 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94d04030_7e00_4f80_8d77_df7d10806ed8.slice/crio-40db6cb54a1b80d6942153b744045270ea36042502d032f436de90206c4ba4f8 WatchSource:0}: Error finding container 40db6cb54a1b80d6942153b744045270ea36042502d032f436de90206c4ba4f8: Status 404 returned error can't find the container with id 40db6cb54a1b80d6942153b744045270ea36042502d032f436de90206c4ba4f8 Dec 13 00:25:10 crc kubenswrapper[3562]: I1213 00:25:10.337766 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" event={"ID":"94d04030-7e00-4f80-8d77-df7d10806ed8","Type":"ContainerStarted","Data":"40db6cb54a1b80d6942153b744045270ea36042502d032f436de90206c4ba4f8"} Dec 13 00:25:13 crc kubenswrapper[3562]: I1213 00:25:13.053123 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:25:13 crc kubenswrapper[3562]: I1213 00:25:13.053509 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:25:18 crc kubenswrapper[3562]: I1213 00:25:18.008496 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 13 00:25:18 crc kubenswrapper[3562]: I1213 00:25:18.115966 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 13 00:25:18 crc kubenswrapper[3562]: I1213 00:25:18.392135 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" event={"ID":"23d6b871-3611-4291-91d7-f444af791b10","Type":"ContainerStarted","Data":"3d00cade02b567b00fd9e203d129d353ea8853d254bc717ca402c176293d2d1b"} Dec 13 00:25:18 crc kubenswrapper[3562]: I1213 00:25:18.399945 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" event={"ID":"41908f98-a23e-41a8-bde3-404d2f858c1f","Type":"ContainerStarted","Data":"c7a5bd26ef4aa79d09683a035feace13f629116ba01b754ff2354733364e39de"} Dec 13 00:25:18 crc kubenswrapper[3562]: I1213 00:25:18.403043 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" event={"ID":"287cbd43-c207-4358-969d-3ccbc72f0186","Type":"ContainerStarted","Data":"3f02f20595351e694c1ebc09bb93093e9d79924c9174593b511504bc7df9a733"} Dec 13 00:25:18 crc kubenswrapper[3562]: I1213 00:25:18.504483 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 13 00:25:19 
crc kubenswrapper[3562]: I1213 00:25:19.411710 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp" event={"ID":"150cc378-bfa8-4da4-95e4-2f3565086ece","Type":"ContainerStarted","Data":"311e9fe2966feddf39306069efdcdc8bc44705de0bae03d000ed9c2092566c69"}
Dec 13 00:25:19 crc kubenswrapper[3562]: I1213 00:25:19.414164 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" event={"ID":"94d04030-7e00-4f80-8d77-df7d10806ed8","Type":"ContainerStarted","Data":"26be18a5b73a9a83246e53a406f46ba8b3a5ecf921894a2ebea8eca522976c4b"}
Dec 13 00:25:24 crc kubenswrapper[3562]: I1213 00:25:24.190450 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-qccls"]
Dec 13 00:25:24 crc kubenswrapper[3562]: I1213 00:25:24.191297 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" podUID="15cf5f07-acc3-4ddf-8ef3-4db09075d50c" containerName="default-interconnect" containerID="cri-o://5cbc281a13a75e7e5f9fa73d9c00c22dd5acb0ff16858a3633b40a7ed4492010" gracePeriod=30
Dec 13 00:25:24 crc kubenswrapper[3562]: I1213 00:25:24.455116 3562 generic.go:334] "Generic (PLEG): container finished" podID="41908f98-a23e-41a8-bde3-404d2f858c1f" containerID="c7a5bd26ef4aa79d09683a035feace13f629116ba01b754ff2354733364e39de" exitCode=0
Dec 13 00:25:24 crc kubenswrapper[3562]: I1213 00:25:24.455196 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" event={"ID":"41908f98-a23e-41a8-bde3-404d2f858c1f","Type":"ContainerDied","Data":"c7a5bd26ef4aa79d09683a035feace13f629116ba01b754ff2354733364e39de"}
Dec 13 00:25:24 crc kubenswrapper[3562]: I1213 00:25:24.456320 3562 generic.go:334] "Generic (PLEG): container finished" podID="23d6b871-3611-4291-91d7-f444af791b10" containerID="3d00cade02b567b00fd9e203d129d353ea8853d254bc717ca402c176293d2d1b" exitCode=0
Dec 13 00:25:24 crc kubenswrapper[3562]: I1213 00:25:24.456376 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" event={"ID":"23d6b871-3611-4291-91d7-f444af791b10","Type":"ContainerDied","Data":"3d00cade02b567b00fd9e203d129d353ea8853d254bc717ca402c176293d2d1b"}
Dec 13 00:25:24 crc kubenswrapper[3562]: I1213 00:25:24.458594 3562 generic.go:334] "Generic (PLEG): container finished" podID="15cf5f07-acc3-4ddf-8ef3-4db09075d50c" containerID="5cbc281a13a75e7e5f9fa73d9c00c22dd5acb0ff16858a3633b40a7ed4492010" exitCode=0
Dec 13 00:25:24 crc kubenswrapper[3562]: I1213 00:25:24.458628 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" event={"ID":"15cf5f07-acc3-4ddf-8ef3-4db09075d50c","Type":"ContainerDied","Data":"5cbc281a13a75e7e5f9fa73d9c00c22dd5acb0ff16858a3633b40a7ed4492010"}
Dec 13 00:25:25 crc kubenswrapper[3562]: I1213 00:25:25.468156 3562 generic.go:334] "Generic (PLEG): container finished" podID="150cc378-bfa8-4da4-95e4-2f3565086ece" containerID="311e9fe2966feddf39306069efdcdc8bc44705de0bae03d000ed9c2092566c69" exitCode=0
Dec 13 00:25:25 crc kubenswrapper[3562]: I1213 00:25:25.468348 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp" event={"ID":"150cc378-bfa8-4da4-95e4-2f3565086ece","Type":"ContainerDied","Data":"311e9fe2966feddf39306069efdcdc8bc44705de0bae03d000ed9c2092566c69"}
Dec 13 00:25:25 crc kubenswrapper[3562]: I1213 00:25:25.470453 3562 generic.go:334] "Generic (PLEG): container finished" podID="94d04030-7e00-4f80-8d77-df7d10806ed8" containerID="26be18a5b73a9a83246e53a406f46ba8b3a5ecf921894a2ebea8eca522976c4b" exitCode=0
Dec 13 00:25:25 crc kubenswrapper[3562]: I1213 00:25:25.470525 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" event={"ID":"94d04030-7e00-4f80-8d77-df7d10806ed8","Type":"ContainerDied","Data":"26be18a5b73a9a83246e53a406f46ba8b3a5ecf921894a2ebea8eca522976c4b"}
Dec 13 00:25:25 crc kubenswrapper[3562]: I1213 00:25:25.471952 3562 generic.go:334] "Generic (PLEG): container finished" podID="287cbd43-c207-4358-969d-3ccbc72f0186" containerID="3f02f20595351e694c1ebc09bb93093e9d79924c9174593b511504bc7df9a733" exitCode=0
Dec 13 00:25:25 crc kubenswrapper[3562]: I1213 00:25:25.471986 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" event={"ID":"287cbd43-c207-4358-969d-3ccbc72f0186","Type":"ContainerDied","Data":"3f02f20595351e694c1ebc09bb93093e9d79924c9174593b511504bc7df9a733"}
Dec 13 00:25:25 crc kubenswrapper[3562]: I1213 00:25:25.999362 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-qccls"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.043798 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-5xww6"]
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.044384 3562 topology_manager.go:215] "Topology Admit Handler" podUID="afc29778-9317-41fc-8041-68516abc5994" podNamespace="service-telemetry" podName="default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: E1213 00:25:26.044744 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="15cf5f07-acc3-4ddf-8ef3-4db09075d50c" containerName="default-interconnect"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.045126 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cf5f07-acc3-4ddf-8ef3-4db09075d50c" containerName="default-interconnect"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.045406 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="15cf5f07-acc3-4ddf-8ef3-4db09075d50c" containerName="default-interconnect"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.046335 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.055550 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-5xww6"]
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.152288 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-users\") pod \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") "
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.152957 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-credentials\") pod \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") "
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156039 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-config\") pod \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") "
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156087 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-credentials\") pod \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") "
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156119 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-ca\") pod \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") "
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156166 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-ca\") pod \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") "
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156198 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7v92\" (UniqueName: \"kubernetes.io/projected/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-kube-api-access-h7v92\") pod \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\" (UID: \"15cf5f07-acc3-4ddf-8ef3-4db09075d50c\") "
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156461 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/afc29778-9317-41fc-8041-68516abc5994-sasl-config\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156525 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnkps\" (UniqueName: \"kubernetes.io/projected/afc29778-9317-41fc-8041-68516abc5994-kube-api-access-pnkps\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156548 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-sasl-users\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156574 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156598 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156625 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156659 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.156936 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "15cf5f07-acc3-4ddf-8ef3-4db09075d50c" (UID: "15cf5f07-acc3-4ddf-8ef3-4db09075d50c"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.159163 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "15cf5f07-acc3-4ddf-8ef3-4db09075d50c" (UID: "15cf5f07-acc3-4ddf-8ef3-4db09075d50c"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.165236 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "15cf5f07-acc3-4ddf-8ef3-4db09075d50c" (UID: "15cf5f07-acc3-4ddf-8ef3-4db09075d50c"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.165988 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "15cf5f07-acc3-4ddf-8ef3-4db09075d50c" (UID: "15cf5f07-acc3-4ddf-8ef3-4db09075d50c"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.167448 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "15cf5f07-acc3-4ddf-8ef3-4db09075d50c" (UID: "15cf5f07-acc3-4ddf-8ef3-4db09075d50c"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.168132 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-kube-api-access-h7v92" (OuterVolumeSpecName: "kube-api-access-h7v92") pod "15cf5f07-acc3-4ddf-8ef3-4db09075d50c" (UID: "15cf5f07-acc3-4ddf-8ef3-4db09075d50c"). InnerVolumeSpecName "kube-api-access-h7v92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.168848 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "15cf5f07-acc3-4ddf-8ef3-4db09075d50c" (UID: "15cf5f07-acc3-4ddf-8ef3-4db09075d50c"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.257861 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.257995 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/afc29778-9317-41fc-8041-68516abc5994-sasl-config\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258043 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pnkps\" (UniqueName: \"kubernetes.io/projected/afc29778-9317-41fc-8041-68516abc5994-kube-api-access-pnkps\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258068 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-sasl-users\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258091 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258113 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258139 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258194 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h7v92\" (UniqueName: \"kubernetes.io/projected/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-kube-api-access-h7v92\") on node \"crc\" DevicePath \"\""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258214 3562 reconciler_common.go:300] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-users\") on node \"crc\" DevicePath \"\""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258228 3562 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258238 3562 reconciler_common.go:300] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-sasl-config\") on node \"crc\" DevicePath \"\""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258249 3562 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258259 3562 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.258271 3562 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/15cf5f07-acc3-4ddf-8ef3-4db09075d50c-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\""
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.260728 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/afc29778-9317-41fc-8041-68516abc5994-sasl-config\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.263871 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-sasl-users\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.264143 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.264495 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.266054 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.267507 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/afc29778-9317-41fc-8041-68516abc5994-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.281471 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnkps\" (UniqueName: \"kubernetes.io/projected/afc29778-9317-41fc-8041-68516abc5994-kube-api-access-pnkps\") pod \"default-interconnect-84dbc59cb8-5xww6\" (UID: \"afc29778-9317-41fc-8041-68516abc5994\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.389484 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.520717 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-qccls" event={"ID":"15cf5f07-acc3-4ddf-8ef3-4db09075d50c","Type":"ContainerDied","Data":"403095897c7a331e568c2e569a8ddf241d6e6f47a44ab91f5aa1fe02e253b28e"}
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.521076 3562 scope.go:117] "RemoveContainer" containerID="5cbc281a13a75e7e5f9fa73d9c00c22dd5acb0ff16858a3633b40a7ed4492010"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.521226 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-qccls"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.563403 3562 scope.go:117] "RemoveContainer" containerID="26be18a5b73a9a83246e53a406f46ba8b3a5ecf921894a2ebea8eca522976c4b"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.575537 3562 scope.go:117] "RemoveContainer" containerID="3f02f20595351e694c1ebc09bb93093e9d79924c9174593b511504bc7df9a733"
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.650253 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-qccls"]
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.667148 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-qccls"]
Dec 13 00:25:26 crc kubenswrapper[3562]: I1213 00:25:26.902177 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-5xww6"]
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.576280 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp" event={"ID":"150cc378-bfa8-4da4-95e4-2f3565086ece","Type":"ContainerStarted","Data":"7ae458959a14fc7c6592afd51b5695392747d7a7e22f0a25964b061db8410296"}
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.577307 3562 scope.go:117] "RemoveContainer" containerID="311e9fe2966feddf39306069efdcdc8bc44705de0bae03d000ed9c2092566c69"
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.580698 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" event={"ID":"94d04030-7e00-4f80-8d77-df7d10806ed8","Type":"ContainerStarted","Data":"aab7d3dc8bd87d24402a92af13e6c58c91562775b35d5e87564a21c36d48953d"}
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.580749 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" event={"ID":"94d04030-7e00-4f80-8d77-df7d10806ed8","Type":"ContainerStarted","Data":"d1fa38ce9e8ab87c8e3f99fb56711f2f9fb5cbeb500216ff74caa2ae85ffc73b"}
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.583639 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" event={"ID":"41908f98-a23e-41a8-bde3-404d2f858c1f","Type":"ContainerStarted","Data":"cd01afb8bc46649c8290d0722fe8344dddc03ca9ba22e43e15ae1c47b6c83397"}
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.584430 3562 scope.go:117] "RemoveContainer" containerID="c7a5bd26ef4aa79d09683a035feace13f629116ba01b754ff2354733364e39de"
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.585042 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6" event={"ID":"afc29778-9317-41fc-8041-68516abc5994","Type":"ContainerStarted","Data":"6574686b9098315dd1abbfcd61258ad7e1c3f39372f605a1651972a31ba61c40"}
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.585080 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6" event={"ID":"afc29778-9317-41fc-8041-68516abc5994","Type":"ContainerStarted","Data":"5072f1400909b748e45edfbeb1e789f94b6cba40eae33f7702307fd4839a90c6"}
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.589761 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" event={"ID":"287cbd43-c207-4358-969d-3ccbc72f0186","Type":"ContainerStarted","Data":"00d5783b95d100afdd3424e5e3cb5844a81eff43b1400473bf7b75a797ea381b"}
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.589799 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" event={"ID":"287cbd43-c207-4358-969d-3ccbc72f0186","Type":"ContainerStarted","Data":"53917da509155d64eea5094187fe3aeec7b29cc8a2545ec0d2b5f81c050d9071"}
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.591940 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" event={"ID":"23d6b871-3611-4291-91d7-f444af791b10","Type":"ContainerStarted","Data":"ceb9acf5f72dc3b35c1953e039db30ea93c7108652ad8613ea3c75d8e7940356"}
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.592216 3562 scope.go:117] "RemoveContainer" containerID="3d00cade02b567b00fd9e203d129d353ea8853d254bc717ca402c176293d2d1b"
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.619283 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" podStartSLOduration=2.167415146 podStartE2EDuration="19.619239551s" podCreationTimestamp="2025-12-13 00:25:08 +0000 UTC" firstStartedPulling="2025-12-13 00:25:09.644819772 +0000 UTC m=+1146.128050295" lastFinishedPulling="2025-12-13 00:25:27.096644177 +0000 UTC m=+1163.579874700" observedRunningTime="2025-12-13 00:25:27.615448585 +0000 UTC m=+1164.098679108" watchObservedRunningTime="2025-12-13 00:25:27.619239551 +0000 UTC m=+1164.102470074"
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.636135 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" podStartSLOduration=4.221769029 podStartE2EDuration="38.635879669s" podCreationTimestamp="2025-12-13 00:24:49 +0000 UTC" firstStartedPulling="2025-12-13 00:24:52.580140397 +0000 UTC m=+1129.063370920" lastFinishedPulling="2025-12-13 00:25:26.994251037 +0000 UTC m=+1163.477481560" observedRunningTime="2025-12-13 00:25:27.632559506 +0000 UTC m=+1164.115790029" watchObservedRunningTime="2025-12-13 00:25:27.635879669 +0000 UTC m=+1164.119110192"
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.655169 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-interconnect-84dbc59cb8-5xww6" podStartSLOduration=3.6551297910000002 podStartE2EDuration="3.655129791s" podCreationTimestamp="2025-12-13 00:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:25:27.654113062 +0000 UTC m=+1164.137343595" watchObservedRunningTime="2025-12-13 00:25:27.655129791 +0000 UTC m=+1164.138360314"
Dec 13 00:25:27 crc kubenswrapper[3562]: I1213 00:25:27.915236 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15cf5f07-acc3-4ddf-8ef3-4db09075d50c" path="/var/lib/kubelet/pods/15cf5f07-acc3-4ddf-8ef3-4db09075d50c/volumes"
Dec 13 00:25:28 crc kubenswrapper[3562]: I1213 00:25:28.606129 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" event={"ID":"41908f98-a23e-41a8-bde3-404d2f858c1f","Type":"ContainerStarted","Data":"de0d9c16a8824557d8a8d4410dca555abce7b061516923e08bbbdaa9d05e27c7"}
Dec 13 00:25:28 crc kubenswrapper[3562]: I1213 00:25:28.608621 3562 generic.go:334] "Generic (PLEG): container finished" podID="287cbd43-c207-4358-969d-3ccbc72f0186" containerID="53917da509155d64eea5094187fe3aeec7b29cc8a2545ec0d2b5f81c050d9071" exitCode=0
Dec 13 00:25:28 crc kubenswrapper[3562]: I1213 00:25:28.608670 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" event={"ID":"287cbd43-c207-4358-969d-3ccbc72f0186","Type":"ContainerDied","Data":"53917da509155d64eea5094187fe3aeec7b29cc8a2545ec0d2b5f81c050d9071"}
Dec 13 00:25:28 crc kubenswrapper[3562]: I1213 00:25:28.608702 3562 scope.go:117] "RemoveContainer" containerID="3f02f20595351e694c1ebc09bb93093e9d79924c9174593b511504bc7df9a733"
Dec 13 00:25:28 crc kubenswrapper[3562]: I1213 00:25:28.609689 3562 scope.go:117] "RemoveContainer" containerID="53917da509155d64eea5094187fe3aeec7b29cc8a2545ec0d2b5f81c050d9071"
Dec 13 00:25:28 crc kubenswrapper[3562]: E1213 00:25:28.610259 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5_service-telemetry(287cbd43-c207-4358-969d-3ccbc72f0186)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" podUID="287cbd43-c207-4358-969d-3ccbc72f0186"
Dec 13 00:25:28 crc kubenswrapper[3562]: I1213 00:25:28.612581 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" event={"ID":"23d6b871-3611-4291-91d7-f444af791b10","Type":"ContainerStarted","Data":"90386c5fd14c4c8609fa49670145caac084da325c291f624373c60c96eb26a56"}
Dec 13 00:25:28 crc kubenswrapper[3562]: I1213 00:25:28.617329 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp" event={"ID":"150cc378-bfa8-4da4-95e4-2f3565086ece","Type":"ContainerStarted","Data":"0b31f94fee43e307e87f8b6fe5e17992720b0fe523c21c424b4a213a818b844c"}
Dec 13 00:25:28 crc kubenswrapper[3562]: I1213 00:25:28.634131 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l" podStartSLOduration=12.117226241 podStartE2EDuration="34.633791446s" podCreationTimestamp="2025-12-13 00:24:54 +0000 UTC" firstStartedPulling="2025-12-13 00:25:05.442752484 +0000 UTC m=+1141.925983007" lastFinishedPulling="2025-12-13 00:25:27.959317689 +0000 UTC m=+1164.442548212" observedRunningTime="2025-12-13 00:25:28.631775959 +0000 UTC m=+1165.115006482" watchObservedRunningTime="2025-12-13 00:25:28.633791446 +0000 UTC m=+1165.117021969"
Dec 13 00:25:28 crc kubenswrapper[3562]: I1213 00:25:28.671334 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp" podStartSLOduration=4.038947384 podStartE2EDuration="41.671050944s" podCreationTimestamp="2025-12-13 00:24:47 +0000 UTC" firstStartedPulling="2025-12-13 00:24:50.367286057 +0000 UTC m=+1126.850516580" lastFinishedPulling="2025-12-13 00:25:27.999389617 +0000 UTC m=+1164.482620140" observedRunningTime="2025-12-13 00:25:28.665633581 +0000 UTC m=+1165.148864114" watchObservedRunningTime="2025-12-13 00:25:28.671050944 +0000 UTC m=+1165.154281467"
Dec 13 00:25:28 crc kubenswrapper[3562]: I1213 00:25:28.731853 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk" podStartSLOduration=2.294264542 podStartE2EDuration="24.731768112s" podCreationTimestamp="2025-12-13 00:25:04 +0000 UTC" firstStartedPulling="2025-12-13 00:25:05.57124326 +0000 UTC m=+1142.054473783" lastFinishedPulling="2025-12-13 00:25:28.00874683 +0000 UTC m=+1164.491977353" observedRunningTime="2025-12-13 00:25:28.726139524 +0000 UTC m=+1165.209370047" watchObservedRunningTime="2025-12-13 00:25:28.731768112 +0000 UTC m=+1165.214998635"
Dec 13 00:25:29 crc kubenswrapper[3562]: I1213 00:25:29.626295 3562 generic.go:334] "Generic (PLEG): container finished" podID="94d04030-7e00-4f80-8d77-df7d10806ed8" containerID="d1fa38ce9e8ab87c8e3f99fb56711f2f9fb5cbeb500216ff74caa2ae85ffc73b" exitCode=0
Dec 13 00:25:29 crc kubenswrapper[3562]: I1213 00:25:29.626355 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" event={"ID":"94d04030-7e00-4f80-8d77-df7d10806ed8","Type":"ContainerDied","Data":"d1fa38ce9e8ab87c8e3f99fb56711f2f9fb5cbeb500216ff74caa2ae85ffc73b"}
Dec 13 00:25:29 crc kubenswrapper[3562]: I1213 00:25:29.627325 3562 scope.go:117] "RemoveContainer" containerID="26be18a5b73a9a83246e53a406f46ba8b3a5ecf921894a2ebea8eca522976c4b"
Dec 13 00:25:29 crc kubenswrapper[3562]: I1213 00:25:29.628231 3562 scope.go:117] "RemoveContainer" containerID="d1fa38ce9e8ab87c8e3f99fb56711f2f9fb5cbeb500216ff74caa2ae85ffc73b"
Dec 13 00:25:29 crc kubenswrapper[3562]: E1213 00:25:29.628977 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm_service-telemetry(94d04030-7e00-4f80-8d77-df7d10806ed8)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" podUID="94d04030-7e00-4f80-8d77-df7d10806ed8"
Dec 13 00:25:38 crc kubenswrapper[3562]: I1213 00:25:38.874738 3562 scope.go:117] "RemoveContainer" containerID="53917da509155d64eea5094187fe3aeec7b29cc8a2545ec0d2b5f81c050d9071"
Dec 13 00:25:40 crc kubenswrapper[3562]: I1213 00:25:40.704705 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5" event={"ID":"287cbd43-c207-4358-969d-3ccbc72f0186","Type":"ContainerStarted","Data":"b9613d68345a3622de08fce9ef1909332ca0f8098697c367a3bc8d7a7ce354d2"}
Dec 13 00:25:42 crc kubenswrapper[3562]: I1213 00:25:42.875031 3562 scope.go:117] "RemoveContainer" containerID="d1fa38ce9e8ab87c8e3f99fb56711f2f9fb5cbeb500216ff74caa2ae85ffc73b"
Dec 13 00:25:43 crc kubenswrapper[3562]: I1213 00:25:43.052441 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 00:25:43 crc kubenswrapper[3562]: I1213 00:25:43.052508 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 00:25:43 crc kubenswrapper[3562]: I1213 00:25:43.052542 3562 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Dec 13 00:25:43 crc kubenswrapper[3562]: I1213 00:25:43.053449 3562 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a472952244bcda71b7ed2e29a85cbb61dc9bc7aa5fa043fe4877ed29fb76b92"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 13 00:25:43 crc kubenswrapper[3562]: I1213 00:25:43.053624 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://9a472952244bcda71b7ed2e29a85cbb61dc9bc7aa5fa043fe4877ed29fb76b92" gracePeriod=600
Dec 13 00:25:43 crc kubenswrapper[3562]: I1213 00:25:43.722524 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm" event={"ID":"94d04030-7e00-4f80-8d77-df7d10806ed8","Type":"ContainerStarted","Data":"a67073ab329ec0b16d28d95d256f5ab249cb057213aed21b0c698224ed893926"}
Dec 13 00:25:43 crc kubenswrapper[3562]: I1213 00:25:43.725601 3562 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="9a472952244bcda71b7ed2e29a85cbb61dc9bc7aa5fa043fe4877ed29fb76b92" exitCode=0
Dec 13 00:25:43 crc kubenswrapper[3562]: I1213 00:25:43.725633 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"9a472952244bcda71b7ed2e29a85cbb61dc9bc7aa5fa043fe4877ed29fb76b92"}
Dec 13 00:25:43 crc kubenswrapper[3562]: I1213 00:25:43.725655 3562 scope.go:117] "RemoveContainer" containerID="4ad64da496031e88d35a286745296696d921692888dc942adb2368e3bb1eb23a"
Dec 13 00:25:44 crc kubenswrapper[3562]: I1213 00:25:44.733783 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"f455be034ac89502e36b1c159caf9a6a34f19bdcc892c35ff1575878b6abc592"}
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.255045 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"]
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.255748 3562 topology_manager.go:215] "Topology Admit Handler" podUID="7f658d1d-6f3e-4481-9c91-e7d15b724e2d" podNamespace="service-telemetry" podName="qdr-test"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.256465 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.261171 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.261171 3562 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.263446 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"]
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.425372 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/7f658d1d-6f3e-4481-9c91-e7d15b724e2d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"7f658d1d-6f3e-4481-9c91-e7d15b724e2d\") " pod="service-telemetry/qdr-test"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.425702 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzwlr\" (UniqueName: \"kubernetes.io/projected/7f658d1d-6f3e-4481-9c91-e7d15b724e2d-kube-api-access-nzwlr\") pod \"qdr-test\" (UID: \"7f658d1d-6f3e-4481-9c91-e7d15b724e2d\") " pod="service-telemetry/qdr-test"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.425735 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/7f658d1d-6f3e-4481-9c91-e7d15b724e2d-qdr-test-config\") pod \"qdr-test\" (UID: \"7f658d1d-6f3e-4481-9c91-e7d15b724e2d\") " pod="service-telemetry/qdr-test"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.527186 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/7f658d1d-6f3e-4481-9c91-e7d15b724e2d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"7f658d1d-6f3e-4481-9c91-e7d15b724e2d\") " pod="service-telemetry/qdr-test"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.527580 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nzwlr\" (UniqueName: \"kubernetes.io/projected/7f658d1d-6f3e-4481-9c91-e7d15b724e2d-kube-api-access-nzwlr\") pod \"qdr-test\" (UID: \"7f658d1d-6f3e-4481-9c91-e7d15b724e2d\") " pod="service-telemetry/qdr-test"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.527694 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/7f658d1d-6f3e-4481-9c91-e7d15b724e2d-qdr-test-config\") pod \"qdr-test\" (UID: \"7f658d1d-6f3e-4481-9c91-e7d15b724e2d\") " pod="service-telemetry/qdr-test"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.528591 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/7f658d1d-6f3e-4481-9c91-e7d15b724e2d-qdr-test-config\") pod \"qdr-test\" (UID: \"7f658d1d-6f3e-4481-9c91-e7d15b724e2d\") " pod="service-telemetry/qdr-test"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.538734 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/7f658d1d-6f3e-4481-9c91-e7d15b724e2d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"7f658d1d-6f3e-4481-9c91-e7d15b724e2d\") " pod="service-telemetry/qdr-test"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.553067 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzwlr\" (UniqueName: \"kubernetes.io/projected/7f658d1d-6f3e-4481-9c91-e7d15b724e2d-kube-api-access-nzwlr\") pod \"qdr-test\" (UID: \"7f658d1d-6f3e-4481-9c91-e7d15b724e2d\") " pod="service-telemetry/qdr-test"
Dec 13 00:25:54 crc kubenswrapper[3562]: I1213 00:25:54.615051 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test"
Dec 13 00:25:55 crc kubenswrapper[3562]: I1213 00:25:55.063849 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"]
Dec 13 00:25:55 crc kubenswrapper[3562]: W1213 00:25:55.070153 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f658d1d_6f3e_4481_9c91_e7d15b724e2d.slice/crio-904608d3bbbc17e64800a315c0e9440432053f54e223c024bc0bff94ea26500f WatchSource:0}: Error finding container 904608d3bbbc17e64800a315c0e9440432053f54e223c024bc0bff94ea26500f: Status 404 returned error can't find the container with id 904608d3bbbc17e64800a315c0e9440432053f54e223c024bc0bff94ea26500f
Dec 13 00:25:55 crc kubenswrapper[3562]: I1213 00:25:55.792977 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"7f658d1d-6f3e-4481-9c91-e7d15b724e2d","Type":"ContainerStarted","Data":"904608d3bbbc17e64800a315c0e9440432053f54e223c024bc0bff94ea26500f"}
Dec 13 00:26:03 crc kubenswrapper[3562]: I1213 00:26:03.751367 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Dec 13 00:26:03 crc kubenswrapper[3562]: I1213 00:26:03.751924 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Dec 13 00:26:03 crc kubenswrapper[3562]: I1213 00:26:03.751947 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Dec 13 00:26:03 crc kubenswrapper[3562]: I1213 00:26:03.751967 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Dec 13 00:26:03 crc kubenswrapper[3562]: I1213 00:26:03.751996 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Dec 13 00:26:04 crc kubenswrapper[3562]: I1213 00:26:04.853079 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"7f658d1d-6f3e-4481-9c91-e7d15b724e2d","Type":"ContainerStarted","Data":"75a0ebe8d1c82fc1483e33a6ca3075b84abbc7f9e320ac6eefa14f6ab3b01b0e"}
Dec 13 00:26:04 crc kubenswrapper[3562]: I1213 00:26:04.867128 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.598920155 podStartE2EDuration="10.867089599s" podCreationTimestamp="2025-12-13 00:25:54 +0000 UTC" firstStartedPulling="2025-12-13 00:25:55.072278907 +0000 UTC m=+1191.555509430" lastFinishedPulling="2025-12-13 00:26:04.340448361 +0000 UTC m=+1200.823678874" observedRunningTime="2025-12-13 00:26:04.867011356 +0000 UTC m=+1201.350241889" watchObservedRunningTime="2025-12-13 00:26:04.867089599 +0000 UTC m=+1201.350320122"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.253542 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8m4t4"]
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.254281 3562 topology_manager.go:215] "Topology Admit Handler" podUID="98d7c0ba-8dc2-4a9e-bc96-512facab2925" podNamespace="service-telemetry" podName="stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.255319 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.257310 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.257803 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.258263 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.259255 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.259419 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.262239 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.277119 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8m4t4"]
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.398951 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-sensubility-config\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.399017 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlpkm\" (UniqueName: \"kubernetes.io/projected/98d7c0ba-8dc2-4a9e-bc96-512facab2925-kube-api-access-xlpkm\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.399309 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.399421 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-healthcheck-log\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.399471 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.399590 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-config\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.399682 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.501256 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.501343 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-sensubility-config\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.501369 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-xlpkm\" (UniqueName: \"kubernetes.io/projected/98d7c0ba-8dc2-4a9e-bc96-512facab2925-kube-api-access-xlpkm\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.501404 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.501434 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-healthcheck-log\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.501454 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.501479 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-config\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.503035 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.503108 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-healthcheck-log\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.503142 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-sensubility-config\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.503367 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.503386 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-config\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.504122 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.528383 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlpkm\" (UniqueName: \"kubernetes.io/projected/98d7c0ba-8dc2-4a9e-bc96-512facab2925-kube-api-access-xlpkm\") pod \"stf-smoketest-smoke1-8m4t4\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.571053 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.609793 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"]
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.610262 3562 topology_manager.go:215] "Topology Admit Handler" podUID="f575d316-4148-4b10-a12e-792dd2880483" podNamespace="service-telemetry" podName="curl"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.611080 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.622560 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.805944 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lv27\" (UniqueName: \"kubernetes.io/projected/f575d316-4148-4b10-a12e-792dd2880483-kube-api-access-8lv27\") pod \"curl\" (UID: \"f575d316-4148-4b10-a12e-792dd2880483\") " pod="service-telemetry/curl"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.907264 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8lv27\" (UniqueName: \"kubernetes.io/projected/f575d316-4148-4b10-a12e-792dd2880483-kube-api-access-8lv27\") pod \"curl\" (UID: \"f575d316-4148-4b10-a12e-792dd2880483\") " pod="service-telemetry/curl"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.925735 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lv27\" (UniqueName: \"kubernetes.io/projected/f575d316-4148-4b10-a12e-792dd2880483-kube-api-access-8lv27\") pod \"curl\" (UID: \"f575d316-4148-4b10-a12e-792dd2880483\") " pod="service-telemetry/curl"
Dec 13 00:26:05 crc kubenswrapper[3562]: I1213 00:26:05.974436 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Dec 13 00:26:06 crc kubenswrapper[3562]: I1213 00:26:06.024389 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8m4t4"]
Dec 13 00:26:06 crc kubenswrapper[3562]: W1213 00:26:06.029687 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98d7c0ba_8dc2_4a9e_bc96_512facab2925.slice/crio-1fac24cb3ecfb2baf2276e7dcac7b284c95e8f49c11acfc2475b4fbf9cde71d7 WatchSource:0}: Error finding container 1fac24cb3ecfb2baf2276e7dcac7b284c95e8f49c11acfc2475b4fbf9cde71d7: Status 404 returned error can't find the container with id 1fac24cb3ecfb2baf2276e7dcac7b284c95e8f49c11acfc2475b4fbf9cde71d7
Dec 13 00:26:06 crc kubenswrapper[3562]: I1213 00:26:06.173170 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Dec 13 00:26:06 crc kubenswrapper[3562]: W1213 00:26:06.176842 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf575d316_4148_4b10_a12e_792dd2880483.slice/crio-0d8334d1946c5163ca4183a64011944180ded13670462bccb66225d42456c64b WatchSource:0}: Error finding container 0d8334d1946c5163ca4183a64011944180ded13670462bccb66225d42456c64b: Status 404 returned error can't find the container with id 0d8334d1946c5163ca4183a64011944180ded13670462bccb66225d42456c64b
Dec 13 00:26:06 crc kubenswrapper[3562]: I1213 00:26:06.866725 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f575d316-4148-4b10-a12e-792dd2880483","Type":"ContainerStarted","Data":"0d8334d1946c5163ca4183a64011944180ded13670462bccb66225d42456c64b"}
Dec 13 00:26:06 crc kubenswrapper[3562]: I1213 00:26:06.868176 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8m4t4" event={"ID":"98d7c0ba-8dc2-4a9e-bc96-512facab2925","Type":"ContainerStarted","Data":"1fac24cb3ecfb2baf2276e7dcac7b284c95e8f49c11acfc2475b4fbf9cde71d7"}
Dec 13 00:26:08 crc kubenswrapper[3562]: I1213 00:26:08.879291 3562 generic.go:334] "Generic (PLEG): container finished" podID="f575d316-4148-4b10-a12e-792dd2880483" containerID="e2902187c5e4577e5b47d76d5b241560fbbc648eb9a26cc7a2400f14dd3aa06a" exitCode=0
Dec 13 00:26:08 crc kubenswrapper[3562]: I1213 00:26:08.881016 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f575d316-4148-4b10-a12e-792dd2880483","Type":"ContainerDied","Data":"e2902187c5e4577e5b47d76d5b241560fbbc648eb9a26cc7a2400f14dd3aa06a"}
Dec 13 00:26:10 crc kubenswrapper[3562]: I1213 00:26:10.187126 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Dec 13 00:26:10 crc kubenswrapper[3562]: I1213 00:26:10.285026 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lv27\" (UniqueName: \"kubernetes.io/projected/f575d316-4148-4b10-a12e-792dd2880483-kube-api-access-8lv27\") pod \"f575d316-4148-4b10-a12e-792dd2880483\" (UID: \"f575d316-4148-4b10-a12e-792dd2880483\") "
Dec 13 00:26:10 crc kubenswrapper[3562]: I1213 00:26:10.293483 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f575d316-4148-4b10-a12e-792dd2880483-kube-api-access-8lv27" (OuterVolumeSpecName: "kube-api-access-8lv27") pod "f575d316-4148-4b10-a12e-792dd2880483" (UID: "f575d316-4148-4b10-a12e-792dd2880483"). InnerVolumeSpecName "kube-api-access-8lv27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 00:26:10 crc kubenswrapper[3562]: I1213 00:26:10.341298 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_f575d316-4148-4b10-a12e-792dd2880483/curl/0.log"
Dec 13 00:26:10 crc kubenswrapper[3562]: I1213 00:26:10.386396 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8lv27\" (UniqueName: \"kubernetes.io/projected/f575d316-4148-4b10-a12e-792dd2880483-kube-api-access-8lv27\") on node \"crc\" DevicePath \"\""
Dec 13 00:26:10 crc kubenswrapper[3562]: I1213 00:26:10.586776 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-789c959b9-hx8c4_0ece4845-0358-42bb-a4b5-a2d319c728ae/prometheus-webhook-snmp/0.log"
Dec 13 00:26:10 crc kubenswrapper[3562]: I1213 00:26:10.903366 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f575d316-4148-4b10-a12e-792dd2880483","Type":"ContainerDied","Data":"0d8334d1946c5163ca4183a64011944180ded13670462bccb66225d42456c64b"}
Dec 13 00:26:10 crc kubenswrapper[3562]: I1213 00:26:10.903406 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d8334d1946c5163ca4183a64011944180ded13670462bccb66225d42456c64b"
Dec 13 00:26:10 crc kubenswrapper[3562]: I1213 00:26:10.903510 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Dec 13 00:26:20 crc kubenswrapper[3562]: I1213 00:26:20.977494 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8m4t4" event={"ID":"98d7c0ba-8dc2-4a9e-bc96-512facab2925","Type":"ContainerStarted","Data":"da07103013120d1a91499d1afac12de5e537bb7520c6d9d2a02ca01ea1702a08"}
Dec 13 00:26:28 crc kubenswrapper[3562]: I1213 00:26:28.028675 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8m4t4" event={"ID":"98d7c0ba-8dc2-4a9e-bc96-512facab2925","Type":"ContainerStarted","Data":"8699656ee04a2f1132eebc8a15e855cf90200433aabfd9bdd47121d044cf1c31"}
Dec 13 00:26:28 crc kubenswrapper[3562]: I1213 00:26:28.050343 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-8m4t4" podStartSLOduration=1.8515040630000001 podStartE2EDuration="23.050283731s" podCreationTimestamp="2025-12-13 00:26:05 +0000 UTC" firstStartedPulling="2025-12-13 00:26:06.035468122 +0000 UTC m=+1202.518698645" lastFinishedPulling="2025-12-13 00:26:27.23424778 +0000 UTC m=+1223.717478313" observedRunningTime="2025-12-13 00:26:28.048247123 +0000 UTC m=+1224.531477666" watchObservedRunningTime="2025-12-13 00:26:28.050283731 +0000 UTC m=+1224.533514254"
Dec 13 00:26:40 crc kubenswrapper[3562]: I1213 00:26:40.717269 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-789c959b9-hx8c4_0ece4845-0358-42bb-a4b5-a2d319c728ae/prometheus-webhook-snmp/0.log"
Dec 13 00:26:55 crc kubenswrapper[3562]: I1213 00:26:55.191074 3562 generic.go:334] "Generic (PLEG): container finished" podID="98d7c0ba-8dc2-4a9e-bc96-512facab2925" containerID="da07103013120d1a91499d1afac12de5e537bb7520c6d9d2a02ca01ea1702a08" exitCode=0
Dec 13 00:26:55 crc kubenswrapper[3562]: I1213 00:26:55.191562 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8m4t4" event={"ID":"98d7c0ba-8dc2-4a9e-bc96-512facab2925","Type":"ContainerDied","Data":"da07103013120d1a91499d1afac12de5e537bb7520c6d9d2a02ca01ea1702a08"}
Dec 13 00:26:55 crc kubenswrapper[3562]: I1213 00:26:55.192150 3562 scope.go:117] "RemoveContainer" containerID="da07103013120d1a91499d1afac12de5e537bb7520c6d9d2a02ca01ea1702a08"
Dec 13 00:27:00 crc kubenswrapper[3562]: I1213 00:27:00.239942 3562 generic.go:334] "Generic (PLEG): container finished" podID="98d7c0ba-8dc2-4a9e-bc96-512facab2925" containerID="8699656ee04a2f1132eebc8a15e855cf90200433aabfd9bdd47121d044cf1c31" exitCode=0
Dec 13 00:27:00 crc kubenswrapper[3562]: I1213 00:27:00.240554 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8m4t4" event={"ID":"98d7c0ba-8dc2-4a9e-bc96-512facab2925","Type":"ContainerDied","Data":"8699656ee04a2f1132eebc8a15e855cf90200433aabfd9bdd47121d044cf1c31"}
Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.623794 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8m4t4"
Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.802082 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-entrypoint-script\") pod \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") "
Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.802133 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-healthcheck-log\") pod \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") "
Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.802177 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-sensubility-config\") pod \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") "
Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.802244 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-entrypoint-script\") pod \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") "
Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.802276 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-config\") pod \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") "
Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.802382 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-publisher\") pod \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") "
Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.802421 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlpkm\" (UniqueName:
\"kubernetes.io/projected/98d7c0ba-8dc2-4a9e-bc96-512facab2925-kube-api-access-xlpkm\") pod \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\" (UID: \"98d7c0ba-8dc2-4a9e-bc96-512facab2925\") " Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.809160 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d7c0ba-8dc2-4a9e-bc96-512facab2925-kube-api-access-xlpkm" (OuterVolumeSpecName: "kube-api-access-xlpkm") pod "98d7c0ba-8dc2-4a9e-bc96-512facab2925" (UID: "98d7c0ba-8dc2-4a9e-bc96-512facab2925"). InnerVolumeSpecName "kube-api-access-xlpkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.819939 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "98d7c0ba-8dc2-4a9e-bc96-512facab2925" (UID: "98d7c0ba-8dc2-4a9e-bc96-512facab2925"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.827412 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "98d7c0ba-8dc2-4a9e-bc96-512facab2925" (UID: "98d7c0ba-8dc2-4a9e-bc96-512facab2925"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.839446 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "98d7c0ba-8dc2-4a9e-bc96-512facab2925" (UID: "98d7c0ba-8dc2-4a9e-bc96-512facab2925"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.841218 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "98d7c0ba-8dc2-4a9e-bc96-512facab2925" (UID: "98d7c0ba-8dc2-4a9e-bc96-512facab2925"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.843193 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "98d7c0ba-8dc2-4a9e-bc96-512facab2925" (UID: "98d7c0ba-8dc2-4a9e-bc96-512facab2925"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.843754 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "98d7c0ba-8dc2-4a9e-bc96-512facab2925" (UID: "98d7c0ba-8dc2-4a9e-bc96-512facab2925"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.904437 3562 reconciler_common.go:300] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.904469 3562 reconciler_common.go:300] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.904496 3562 reconciler_common.go:300] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.904508 3562 reconciler_common.go:300] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.904519 3562 reconciler_common.go:300] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.904528 3562 reconciler_common.go:300] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98d7c0ba-8dc2-4a9e-bc96-512facab2925-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 13 00:27:01 crc kubenswrapper[3562]: I1213 00:27:01.904538 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xlpkm\" (UniqueName: \"kubernetes.io/projected/98d7c0ba-8dc2-4a9e-bc96-512facab2925-kube-api-access-xlpkm\") on node \"crc\" DevicePath \"\"" Dec 13 00:27:02 crc kubenswrapper[3562]: I1213 00:27:02.252018 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8m4t4" event={"ID":"98d7c0ba-8dc2-4a9e-bc96-512facab2925","Type":"ContainerDied","Data":"1fac24cb3ecfb2baf2276e7dcac7b284c95e8f49c11acfc2475b4fbf9cde71d7"} Dec 13 00:27:02 crc kubenswrapper[3562]: I1213 00:27:02.252058 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fac24cb3ecfb2baf2276e7dcac7b284c95e8f49c11acfc2475b4fbf9cde71d7" Dec 13 00:27:02 crc kubenswrapper[3562]: I1213 00:27:02.252066 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8m4t4" Dec 13 00:27:03 crc kubenswrapper[3562]: I1213 00:27:03.507079 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8m4t4_98d7c0ba-8dc2-4a9e-bc96-512facab2925/smoketest-collectd/0.log" Dec 13 00:27:03 crc kubenswrapper[3562]: I1213 00:27:03.752744 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:27:03 crc kubenswrapper[3562]: I1213 00:27:03.752825 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:27:03 crc kubenswrapper[3562]: I1213 00:27:03.752844 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:27:03 crc kubenswrapper[3562]: I1213 00:27:03.752862 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:27:03 crc kubenswrapper[3562]: I1213 00:27:03.752897 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:27:03 crc kubenswrapper[3562]: I1213 00:27:03.791331 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8m4t4_98d7c0ba-8dc2-4a9e-bc96-512facab2925/smoketest-ceilometer/0.log" Dec 13 00:27:04 crc kubenswrapper[3562]: I1213 00:27:04.068391 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-84dbc59cb8-5xww6_afc29778-9317-41fc-8041-68516abc5994/default-interconnect/0.log" Dec 13 00:27:04 crc kubenswrapper[3562]: I1213 00:27:04.323742 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp_150cc378-bfa8-4da4-95e4-2f3565086ece/bridge/1.log" Dec 13 00:27:04 crc kubenswrapper[3562]: I1213 00:27:04.580151 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-777b58f8fb-gzckp_150cc378-bfa8-4da4-95e4-2f3565086ece/sg-core/0.log" Dec 13 00:27:04 crc kubenswrapper[3562]: I1213 00:27:04.798053 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk_23d6b871-3611-4291-91d7-f444af791b10/bridge/1.log" Dec 13 00:27:05 crc kubenswrapper[3562]: I1213 00:27:05.018576 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-644cdd7cd-dtrrk_23d6b871-3611-4291-91d7-f444af791b10/sg-core/0.log" Dec 13 00:27:05 crc kubenswrapper[3562]: I1213 00:27:05.238699 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5_287cbd43-c207-4358-969d-3ccbc72f0186/bridge/2.log" Dec 13 00:27:05 crc kubenswrapper[3562]: I1213 00:27:05.527136 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-7db4658b87-fxcl5_287cbd43-c207-4358-969d-3ccbc72f0186/sg-core/0.log" Dec 13 00:27:05 crc kubenswrapper[3562]: I1213 00:27:05.749189 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm_94d04030-7e00-4f80-8d77-df7d10806ed8/bridge/2.log" Dec 13 00:27:05 crc kubenswrapper[3562]: I1213 
00:27:05.976578 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7b967fcf7b-w9fzm_94d04030-7e00-4f80-8d77-df7d10806ed8/sg-core/0.log" Dec 13 00:27:06 crc kubenswrapper[3562]: I1213 00:27:06.219877 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l_41908f98-a23e-41a8-bde3-404d2f858c1f/bridge/1.log" Dec 13 00:27:06 crc kubenswrapper[3562]: I1213 00:27:06.520300 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5c844d475c-tbx6l_41908f98-a23e-41a8-bde3-404d2f858c1f/sg-core/0.log" Dec 13 00:27:08 crc kubenswrapper[3562]: I1213 00:27:08.082266 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-547794674-77bhh_fd9afdce-76cc-4b1a-bbb8-367fa6224635/operator/0.log" Dec 13 00:27:08 crc kubenswrapper[3562]: I1213 00:27:08.346248 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_34d97b33-05e0-49fd-aa10-26387a5a9239/prometheus/0.log" Dec 13 00:27:08 crc kubenswrapper[3562]: I1213 00:27:08.603153 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_fae42e72-8265-42c3-b07a-1426ce5ccfad/elasticsearch/0.log" Dec 13 00:27:08 crc kubenswrapper[3562]: I1213 00:27:08.843957 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-789c959b9-hx8c4_0ece4845-0358-42bb-a4b5-a2d319c728ae/prometheus-webhook-snmp/0.log" Dec 13 00:27:09 crc kubenswrapper[3562]: I1213 00:27:09.101034 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_81bb4d64-c9f8-4d5a-a8c1-e920df8350c9/alertmanager/0.log" Dec 13 00:27:21 crc kubenswrapper[3562]: I1213 00:27:21.732771 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-6cd7d78758-h9chr_2a715853-7b11-4e9c-aa69-fb56733a0ab4/operator/0.log" Dec 13 00:27:23 crc kubenswrapper[3562]: I1213 00:27:23.766645 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-547794674-77bhh_fd9afdce-76cc-4b1a-bbb8-367fa6224635/operator/0.log" Dec 13 00:27:24 crc kubenswrapper[3562]: I1213 00:27:24.031738 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_7f658d1d-6f3e-4481-9c91-e7d15b724e2d/qdr/0.log" Dec 13 00:27:43 crc kubenswrapper[3562]: I1213 00:27:43.053297 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:27:43 crc kubenswrapper[3562]: I1213 00:27:43.053921 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.750897 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vhqtw/must-gather-6qgp5"] Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.751627 
3562 topology_manager.go:215] "Topology Admit Handler" podUID="bb8686c8-08d0-4b59-9f59-1945982f1902" podNamespace="openshift-must-gather-vhqtw" podName="must-gather-6qgp5" Dec 13 00:27:47 crc kubenswrapper[3562]: E1213 00:27:47.751799 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="98d7c0ba-8dc2-4a9e-bc96-512facab2925" containerName="smoketest-collectd" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.751809 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d7c0ba-8dc2-4a9e-bc96-512facab2925" containerName="smoketest-collectd" Dec 13 00:27:47 crc kubenswrapper[3562]: E1213 00:27:47.751824 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="f575d316-4148-4b10-a12e-792dd2880483" containerName="curl" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.751831 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="f575d316-4148-4b10-a12e-792dd2880483" containerName="curl" Dec 13 00:27:47 crc kubenswrapper[3562]: E1213 00:27:47.751848 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="98d7c0ba-8dc2-4a9e-bc96-512facab2925" containerName="smoketest-ceilometer" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.751854 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d7c0ba-8dc2-4a9e-bc96-512facab2925" containerName="smoketest-ceilometer" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.752035 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d7c0ba-8dc2-4a9e-bc96-512facab2925" containerName="smoketest-collectd" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.752054 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d7c0ba-8dc2-4a9e-bc96-512facab2925" containerName="smoketest-ceilometer" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.752067 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="f575d316-4148-4b10-a12e-792dd2880483" containerName="curl" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.752929 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vhqtw/must-gather-6qgp5" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.755656 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vhqtw"/"openshift-service-ca.crt" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.755883 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vhqtw"/"kube-root-ca.crt" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.762598 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vhqtw/must-gather-6qgp5"] Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.871817 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb8686c8-08d0-4b59-9f59-1945982f1902-must-gather-output\") pod \"must-gather-6qgp5\" (UID: \"bb8686c8-08d0-4b59-9f59-1945982f1902\") " pod="openshift-must-gather-vhqtw/must-gather-6qgp5" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.872196 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv9dw\" (UniqueName: \"kubernetes.io/projected/bb8686c8-08d0-4b59-9f59-1945982f1902-kube-api-access-tv9dw\") pod \"must-gather-6qgp5\" (UID: \"bb8686c8-08d0-4b59-9f59-1945982f1902\") " pod="openshift-must-gather-vhqtw/must-gather-6qgp5" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.974282 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb8686c8-08d0-4b59-9f59-1945982f1902-must-gather-output\") pod \"must-gather-6qgp5\" (UID: \"bb8686c8-08d0-4b59-9f59-1945982f1902\") " pod="openshift-must-gather-vhqtw/must-gather-6qgp5" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.974341 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tv9dw\" (UniqueName: \"kubernetes.io/projected/bb8686c8-08d0-4b59-9f59-1945982f1902-kube-api-access-tv9dw\") pod \"must-gather-6qgp5\" (UID: \"bb8686c8-08d0-4b59-9f59-1945982f1902\") " pod="openshift-must-gather-vhqtw/must-gather-6qgp5" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.975208 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb8686c8-08d0-4b59-9f59-1945982f1902-must-gather-output\") pod \"must-gather-6qgp5\" (UID: \"bb8686c8-08d0-4b59-9f59-1945982f1902\") " pod="openshift-must-gather-vhqtw/must-gather-6qgp5" Dec 13 00:27:47 crc kubenswrapper[3562]: I1213 00:27:47.997747 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv9dw\" (UniqueName: \"kubernetes.io/projected/bb8686c8-08d0-4b59-9f59-1945982f1902-kube-api-access-tv9dw\") pod \"must-gather-6qgp5\" (UID: \"bb8686c8-08d0-4b59-9f59-1945982f1902\") " pod="openshift-must-gather-vhqtw/must-gather-6qgp5" Dec 13 00:27:48 crc kubenswrapper[3562]: I1213 00:27:48.072715 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vhqtw/must-gather-6qgp5" Dec 13 00:27:48 crc kubenswrapper[3562]: I1213 00:27:48.297599 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vhqtw/must-gather-6qgp5"] Dec 13 00:27:48 crc kubenswrapper[3562]: W1213 00:27:48.310223 3562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb8686c8_08d0_4b59_9f59_1945982f1902.slice/crio-8151cb3d1b5f3b20c7cf428f14baada48273e35ffe308105b18aaf50ffe64e65 WatchSource:0}: Error finding container 8151cb3d1b5f3b20c7cf428f14baada48273e35ffe308105b18aaf50ffe64e65: Status 404 returned error can't find the container with id 8151cb3d1b5f3b20c7cf428f14baada48273e35ffe308105b18aaf50ffe64e65 Dec 13 00:27:48 crc kubenswrapper[3562]: I1213 00:27:48.523132 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vhqtw/must-gather-6qgp5" event={"ID":"bb8686c8-08d0-4b59-9f59-1945982f1902","Type":"ContainerStarted","Data":"8151cb3d1b5f3b20c7cf428f14baada48273e35ffe308105b18aaf50ffe64e65"} Dec 13 00:27:55 crc kubenswrapper[3562]: I1213 00:27:55.566948 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vhqtw/must-gather-6qgp5" event={"ID":"bb8686c8-08d0-4b59-9f59-1945982f1902","Type":"ContainerStarted","Data":"2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7"} Dec 13 00:27:55 crc kubenswrapper[3562]: I1213 00:27:55.567480 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vhqtw/must-gather-6qgp5" event={"ID":"bb8686c8-08d0-4b59-9f59-1945982f1902","Type":"ContainerStarted","Data":"a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9"} Dec 13 00:27:55 crc kubenswrapper[3562]: I1213 00:27:55.590265 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-must-gather-vhqtw/must-gather-6qgp5" podStartSLOduration=2.216001421 podStartE2EDuration="8.590202258s" podCreationTimestamp="2025-12-13 00:27:47 +0000 UTC" firstStartedPulling="2025-12-13 00:27:48.31519142 +0000 UTC m=+1304.798421943" lastFinishedPulling="2025-12-13 00:27:54.689392257 +0000 UTC m=+1311.172622780" observedRunningTime="2025-12-13 00:27:55.587105091 +0000 UTC m=+1312.070335624" watchObservedRunningTime="2025-12-13 00:27:55.590202258 +0000 UTC m=+1312.073432781" Dec 13 00:28:03 crc kubenswrapper[3562]: I1213 00:28:03.754096 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:28:03 crc kubenswrapper[3562]: I1213 00:28:03.754622 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:28:03 crc kubenswrapper[3562]: I1213 00:28:03.754650 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:28:03 crc kubenswrapper[3562]: I1213 00:28:03.754692 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:28:03 crc kubenswrapper[3562]: I1213 00:28:03.754741 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:28:13 crc kubenswrapper[3562]: I1213 00:28:13.053130 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:28:13 crc kubenswrapper[3562]: I1213 00:28:13.053611 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:28:34 crc kubenswrapper[3562]: I1213 00:28:34.490157 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.log" Dec 13 00:28:34 crc kubenswrapper[3562]: I1213 00:28:34.646365 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.log" Dec 13 00:28:34 crc kubenswrapper[3562]: I1213 00:28:34.726842 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.log" Dec 13 00:28:43 crc kubenswrapper[3562]: I1213 00:28:43.053079 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:28:43 crc kubenswrapper[3562]: I1213 00:28:43.053653 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:28:43 crc kubenswrapper[3562]: I1213 00:28:43.053682 3562 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 13 00:28:43 crc kubenswrapper[3562]: I1213 00:28:43.054475 3562 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f455be034ac89502e36b1c159caf9a6a34f19bdcc892c35ff1575878b6abc592"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 00:28:43 crc kubenswrapper[3562]: I1213 00:28:43.054651 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://f455be034ac89502e36b1c159caf9a6a34f19bdcc892c35ff1575878b6abc592" gracePeriod=600 Dec 13 00:28:43 crc kubenswrapper[3562]: I1213 00:28:43.830367 3562 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="f455be034ac89502e36b1c159caf9a6a34f19bdcc892c35ff1575878b6abc592" exitCode=0 Dec 13 00:28:43 crc kubenswrapper[3562]: I1213 00:28:43.830520 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" 
event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"f455be034ac89502e36b1c159caf9a6a34f19bdcc892c35ff1575878b6abc592"} Dec 13 00:28:43 crc kubenswrapper[3562]: I1213 00:28:43.830956 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee"} Dec 13 00:28:43 crc kubenswrapper[3562]: I1213 00:28:43.830978 3562 scope.go:117] "RemoveContainer" containerID="9a472952244bcda71b7ed2e29a85cbb61dc9bc7aa5fa043fe4877ed29fb76b92" Dec 13 00:28:46 crc kubenswrapper[3562]: I1213 00:28:46.327562 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-755d7666d5-4jjtp_fc5e755b-7db7-41f9-9003-bc4641f2320d/cert-manager-controller/0.log" Dec 13 00:28:46 crc kubenswrapper[3562]: I1213 00:28:46.480435 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-6dcc74f67d-kv7qq_84c9ced0-5209-4237-9076-801a80f3b40f/cert-manager-cainjector/0.log" Dec 13 00:28:46 crc kubenswrapper[3562]: I1213 00:28:46.527636 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-58ffc98b58-qprqg_52bc6c73-d725-404a-b570-16dcb9610fdb/cert-manager-webhook/0.log" Dec 13 00:29:02 crc kubenswrapper[3562]: I1213 00:29:02.079750 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx_7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3/util/0.log" Dec 13 00:29:02 crc kubenswrapper[3562]: I1213 00:29:02.254929 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx_7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3/util/0.log" Dec 13 00:29:02 crc kubenswrapper[3562]: I1213 00:29:02.283895 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx_7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3/pull/0.log" Dec 13 00:29:02 crc kubenswrapper[3562]: I1213 00:29:02.294246 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx_7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3/pull/0.log" Dec 13 00:29:02 crc kubenswrapper[3562]: I1213 00:29:02.479028 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx_7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3/util/0.log" Dec 13 00:29:02 crc kubenswrapper[3562]: I1213 00:29:02.510347 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx_7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3/pull/0.log" Dec 13 00:29:02 crc kubenswrapper[3562]: I1213 00:29:02.533918 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69d2hsx_7ac424f6-5cb0-466a-8ca8-c2b58d61c8c3/extract/0.log" Dec 13 00:29:02 crc kubenswrapper[3562]: I1213 00:29:02.677327 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9_c41a3fdd-e80b-4bc6-b513-912f3503e60f/util/0.log" Dec 13 00:29:02 crc 
kubenswrapper[3562]: I1213 00:29:02.891712 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9_c41a3fdd-e80b-4bc6-b513-912f3503e60f/pull/0.log" Dec 13 00:29:02 crc kubenswrapper[3562]: I1213 00:29:02.899146 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9_c41a3fdd-e80b-4bc6-b513-912f3503e60f/util/0.log" Dec 13 00:29:02 crc kubenswrapper[3562]: I1213 00:29:02.912274 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9_c41a3fdd-e80b-4bc6-b513-912f3503e60f/pull/0.log" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.109787 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9_c41a3fdd-e80b-4bc6-b513-912f3503e60f/util/0.log" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.110628 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9_c41a3fdd-e80b-4bc6-b513-912f3503e60f/pull/0.log" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.161632 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qqgf9_c41a3fdd-e80b-4bc6-b513-912f3503e60f/extract/0.log" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.422902 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7_986aa180-17f8-4424-9f97-5fe45ec05dff/util/0.log" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.660742 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7_986aa180-17f8-4424-9f97-5fe45ec05dff/pull/0.log" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.698405 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7_986aa180-17f8-4424-9f97-5fe45ec05dff/util/0.log" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.713542 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7_986aa180-17f8-4424-9f97-5fe45ec05dff/pull/0.log" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.755640 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.755714 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.755731 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.755758 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.755793 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 
00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.923199 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7_986aa180-17f8-4424-9f97-5fe45ec05dff/util/0.log" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.932296 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7_986aa180-17f8-4424-9f97-5fe45ec05dff/extract/0.log" Dec 13 00:29:03 crc kubenswrapper[3562]: I1213 00:29:03.971883 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9gbx7_986aa180-17f8-4424-9f97-5fe45ec05dff/pull/0.log" Dec 13 00:29:04 crc kubenswrapper[3562]: I1213 00:29:04.111355 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj_02a21e18-3551-4536-a03a-23f7d60b531d/util/0.log" Dec 13 00:29:04 crc kubenswrapper[3562]: I1213 00:29:04.310425 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj_02a21e18-3551-4536-a03a-23f7d60b531d/util/0.log" Dec 13 00:29:04 crc kubenswrapper[3562]: I1213 00:29:04.313332 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj_02a21e18-3551-4536-a03a-23f7d60b531d/pull/0.log" Dec 13 00:29:04 crc kubenswrapper[3562]: I1213 00:29:04.337458 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj_02a21e18-3551-4536-a03a-23f7d60b531d/pull/0.log" Dec 13 00:29:04 crc kubenswrapper[3562]: I1213 00:29:04.483873 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj_02a21e18-3551-4536-a03a-23f7d60b531d/util/0.log" Dec 13 00:29:04 crc kubenswrapper[3562]: I1213 00:29:04.571066 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj_02a21e18-3551-4536-a03a-23f7d60b531d/pull/0.log" Dec 13 00:29:04 crc kubenswrapper[3562]: I1213 00:29:04.640383 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5esrsvj_02a21e18-3551-4536-a03a-23f7d60b531d/extract/0.log" Dec 13 00:29:04 crc kubenswrapper[3562]: I1213 00:29:04.723406 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kk2tn_9f237253-7db9-43eb-99b5-b6bd778581d8/extract-utilities/0.log" Dec 13 00:29:05 crc kubenswrapper[3562]: I1213 00:29:05.104506 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kk2tn_9f237253-7db9-43eb-99b5-b6bd778581d8/extract-utilities/0.log" Dec 13 00:29:05 crc kubenswrapper[3562]: I1213 00:29:05.124616 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kk2tn_9f237253-7db9-43eb-99b5-b6bd778581d8/extract-content/0.log" Dec 13 00:29:05 crc kubenswrapper[3562]: I1213 00:29:05.156478 3562 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-kk2tn_9f237253-7db9-43eb-99b5-b6bd778581d8/extract-content/0.log" Dec 13 00:29:05 crc kubenswrapper[3562]: I1213 00:29:05.523975 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kk2tn_9f237253-7db9-43eb-99b5-b6bd778581d8/extract-utilities/0.log" Dec 13 00:29:05 crc kubenswrapper[3562]: I1213 00:29:05.622882 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kk2tn_9f237253-7db9-43eb-99b5-b6bd778581d8/registry-server/0.log" Dec 13 00:29:05 crc kubenswrapper[3562]: I1213 00:29:05.660307 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kk2tn_9f237253-7db9-43eb-99b5-b6bd778581d8/extract-content/0.log" Dec 13 00:29:05 crc kubenswrapper[3562]: I1213 00:29:05.798374 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jrtnd_182f4041-a9bc-4b35-98c6-fa2baa049a4d/extract-utilities/0.log" Dec 13 00:29:05 crc kubenswrapper[3562]: I1213 00:29:05.955202 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jrtnd_182f4041-a9bc-4b35-98c6-fa2baa049a4d/extract-content/0.log" Dec 13 00:29:05 crc kubenswrapper[3562]: I1213 00:29:05.965011 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jrtnd_182f4041-a9bc-4b35-98c6-fa2baa049a4d/extract-utilities/0.log" Dec 13 00:29:05 crc kubenswrapper[3562]: I1213 00:29:05.994225 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jrtnd_182f4041-a9bc-4b35-98c6-fa2baa049a4d/extract-content/0.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.185676 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jrtnd_182f4041-a9bc-4b35-98c6-fa2baa049a4d/extract-utilities/0.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.190311 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jrtnd_182f4041-a9bc-4b35-98c6-fa2baa049a4d/registry-server/0.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.207064 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jrtnd_182f4041-a9bc-4b35-98c6-fa2baa049a4d/extract-content/0.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.220520 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-cpx94_39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b/marketplace-operator/1.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.400841 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfhjq_7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7/extract-utilities/0.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.405218 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-cpx94_39e4a9ce-79cc-4ea3-b65e-fd5f4eb81c2b/marketplace-operator/0.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.613760 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfhjq_7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7/extract-utilities/0.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.636752 3562 
logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfhjq_7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7/extract-content/0.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.650878 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfhjq_7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7/extract-content/0.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.803575 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfhjq_7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7/extract-utilities/0.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.877096 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfhjq_7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7/registry-server/0.log" Dec 13 00:29:06 crc kubenswrapper[3562]: I1213 00:29:06.887180 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfhjq_7d00c5c9-f08d-49fd-bd4f-ffd4772b86f7/extract-content/0.log" Dec 13 00:29:19 crc kubenswrapper[3562]: I1213 00:29:19.159757 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-864b67f9b9-qzllq_c5fda24c-de7c-4868-a327-defef18d0941/prometheus-operator/0.log" Dec 13 00:29:19 crc kubenswrapper[3562]: I1213 00:29:19.238836 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-84d9fcf5cc-4nvf8_b1f675e1-dac7-4bc6-9d92-4b5ef234ebf0/prometheus-operator-admission-webhook/0.log" Dec 13 00:29:19 crc kubenswrapper[3562]: I1213 00:29:19.390075 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-84d9fcf5cc-x8nr9_e6c8b1e6-ab4b-40e7-b71b-2d36d472d126/prometheus-operator-admission-webhook/0.log" Dec 13 00:29:19 crc kubenswrapper[3562]: I1213 00:29:19.458050 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-65df589ff7-t58nh_4d1602d6-ff5e-41fe-b6e0-e35295a45d7a/operator/0.log" Dec 13 00:29:19 crc kubenswrapper[3562]: I1213 00:29:19.594118 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-574fd8d65d-2l28k_f8ed63a0-9d2d-4db9-8d62-0d69963cc480/perses-operator/0.log" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.191880 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb"] Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.192924 3562 topology_manager.go:215] "Topology Admit Handler" podUID="2a558fe5-8218-415b-b7f1-467634742eca" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.194086 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.196656 3562 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.197066 3562 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.197991 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb"] Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.272733 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a558fe5-8218-415b-b7f1-467634742eca-config-volume\") pod \"collect-profiles-29426430-q4lpb\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.272854 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vpq9\" (UniqueName: \"kubernetes.io/projected/2a558fe5-8218-415b-b7f1-467634742eca-kube-api-access-5vpq9\") pod \"collect-profiles-29426430-q4lpb\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.272964 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a558fe5-8218-415b-b7f1-467634742eca-secret-volume\") pod \"collect-profiles-29426430-q4lpb\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.374860 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a558fe5-8218-415b-b7f1-467634742eca-secret-volume\") pod \"collect-profiles-29426430-q4lpb\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.375002 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a558fe5-8218-415b-b7f1-467634742eca-config-volume\") pod \"collect-profiles-29426430-q4lpb\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.375042 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5vpq9\" (UniqueName: \"kubernetes.io/projected/2a558fe5-8218-415b-b7f1-467634742eca-kube-api-access-5vpq9\") pod \"collect-profiles-29426430-q4lpb\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.376405 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a558fe5-8218-415b-b7f1-467634742eca-config-volume\") pod 
\"collect-profiles-29426430-q4lpb\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.381771 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a558fe5-8218-415b-b7f1-467634742eca-secret-volume\") pod \"collect-profiles-29426430-q4lpb\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.408569 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vpq9\" (UniqueName: \"kubernetes.io/projected/2a558fe5-8218-415b-b7f1-467634742eca-kube-api-access-5vpq9\") pod \"collect-profiles-29426430-q4lpb\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.511590 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:00 crc kubenswrapper[3562]: I1213 00:30:00.712163 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb"] Dec 13 00:30:01 crc kubenswrapper[3562]: I1213 00:30:01.284034 3562 generic.go:334] "Generic (PLEG): container finished" podID="2a558fe5-8218-415b-b7f1-467634742eca" containerID="c7ff9b239b83ea2ad0eb8e5d2f90b32f1259c78e817a374d8b7374460ce5d123" exitCode=0 Dec 13 00:30:01 crc kubenswrapper[3562]: I1213 00:30:01.284144 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" event={"ID":"2a558fe5-8218-415b-b7f1-467634742eca","Type":"ContainerDied","Data":"c7ff9b239b83ea2ad0eb8e5d2f90b32f1259c78e817a374d8b7374460ce5d123"} Dec 13 00:30:01 crc kubenswrapper[3562]: I1213 00:30:01.284347 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" event={"ID":"2a558fe5-8218-415b-b7f1-467634742eca","Type":"ContainerStarted","Data":"730e1108cd5c08e4fa32317a882dacf0a2e3e459e91eb4f9f2e5f7107e9b576e"} Dec 13 00:30:02 crc kubenswrapper[3562]: I1213 00:30:02.547837 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:02 crc kubenswrapper[3562]: I1213 00:30:02.619518 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a558fe5-8218-415b-b7f1-467634742eca-config-volume\") pod \"2a558fe5-8218-415b-b7f1-467634742eca\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " Dec 13 00:30:02 crc kubenswrapper[3562]: I1213 00:30:02.620079 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vpq9\" (UniqueName: \"kubernetes.io/projected/2a558fe5-8218-415b-b7f1-467634742eca-kube-api-access-5vpq9\") pod \"2a558fe5-8218-415b-b7f1-467634742eca\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " Dec 13 00:30:02 crc kubenswrapper[3562]: I1213 00:30:02.620102 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a558fe5-8218-415b-b7f1-467634742eca-secret-volume\") pod \"2a558fe5-8218-415b-b7f1-467634742eca\" (UID: \"2a558fe5-8218-415b-b7f1-467634742eca\") " Dec 13 00:30:02 crc kubenswrapper[3562]: I1213 00:30:02.620840 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a558fe5-8218-415b-b7f1-467634742eca-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a558fe5-8218-415b-b7f1-467634742eca" (UID: "2a558fe5-8218-415b-b7f1-467634742eca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 00:30:02 crc kubenswrapper[3562]: I1213 00:30:02.626216 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a558fe5-8218-415b-b7f1-467634742eca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a558fe5-8218-415b-b7f1-467634742eca" (UID: "2a558fe5-8218-415b-b7f1-467634742eca"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 00:30:02 crc kubenswrapper[3562]: I1213 00:30:02.627270 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a558fe5-8218-415b-b7f1-467634742eca-kube-api-access-5vpq9" (OuterVolumeSpecName: "kube-api-access-5vpq9") pod "2a558fe5-8218-415b-b7f1-467634742eca" (UID: "2a558fe5-8218-415b-b7f1-467634742eca"). InnerVolumeSpecName "kube-api-access-5vpq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:30:02 crc kubenswrapper[3562]: I1213 00:30:02.721765 3562 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a558fe5-8218-415b-b7f1-467634742eca-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 00:30:02 crc kubenswrapper[3562]: I1213 00:30:02.721811 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5vpq9\" (UniqueName: \"kubernetes.io/projected/2a558fe5-8218-415b-b7f1-467634742eca-kube-api-access-5vpq9\") on node \"crc\" DevicePath \"\"" Dec 13 00:30:02 crc kubenswrapper[3562]: I1213 00:30:02.721826 3562 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a558fe5-8218-415b-b7f1-467634742eca-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 13 00:30:03 crc kubenswrapper[3562]: I1213 00:30:03.298839 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" event={"ID":"2a558fe5-8218-415b-b7f1-467634742eca","Type":"ContainerDied","Data":"730e1108cd5c08e4fa32317a882dacf0a2e3e459e91eb4f9f2e5f7107e9b576e"} Dec 13 00:30:03 crc kubenswrapper[3562]: I1213 00:30:03.299291 3562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="730e1108cd5c08e4fa32317a882dacf0a2e3e459e91eb4f9f2e5f7107e9b576e" Dec 13 00:30:03 crc kubenswrapper[3562]: I1213 00:30:03.298864 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426430-q4lpb" Dec 13 00:30:03 crc kubenswrapper[3562]: I1213 00:30:03.617495 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"] Dec 13 00:30:03 crc kubenswrapper[3562]: I1213 00:30:03.624372 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"] Dec 13 00:30:03 crc kubenswrapper[3562]: I1213 00:30:03.756010 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:30:03 crc kubenswrapper[3562]: I1213 00:30:03.756365 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:30:03 crc kubenswrapper[3562]: I1213 00:30:03.756478 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:30:03 crc kubenswrapper[3562]: I1213 00:30:03.756560 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:30:03 crc kubenswrapper[3562]: I1213 00:30:03.756651 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:30:03 crc kubenswrapper[3562]: I1213 00:30:03.882611 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad171c4b-8408-4370-8e86-502999788ddb" path="/var/lib/kubelet/pods/ad171c4b-8408-4370-8e86-502999788ddb/volumes" Dec 13 00:30:04 crc kubenswrapper[3562]: E1213 00:30:04.679536 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89\": container with ID starting with 
67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89 not found: ID does not exist" containerID="67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89" Dec 13 00:30:04 crc kubenswrapper[3562]: I1213 00:30:04.679837 3562 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89" err="rpc error: code = NotFound desc = could not find container \"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89\": container with ID starting with 67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89 not found: ID does not exist" Dec 13 00:30:10 crc kubenswrapper[3562]: I1213 00:30:10.336968 3562 generic.go:334] "Generic (PLEG): container finished" podID="bb8686c8-08d0-4b59-9f59-1945982f1902" containerID="a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9" exitCode=0 Dec 13 00:30:10 crc kubenswrapper[3562]: I1213 00:30:10.337062 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vhqtw/must-gather-6qgp5" event={"ID":"bb8686c8-08d0-4b59-9f59-1945982f1902","Type":"ContainerDied","Data":"a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9"} Dec 13 00:30:10 crc kubenswrapper[3562]: I1213 00:30:10.337872 3562 scope.go:117] "RemoveContainer" containerID="a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9" Dec 13 00:30:10 crc kubenswrapper[3562]: I1213 00:30:10.509955 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vhqtw_must-gather-6qgp5_bb8686c8-08d0-4b59-9f59-1945982f1902/gather/0.log" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.024150 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vhqtw/must-gather-6qgp5"] Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.024930 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-must-gather-vhqtw/must-gather-6qgp5" podUID="bb8686c8-08d0-4b59-9f59-1945982f1902" containerName="copy" containerID="cri-o://2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7" gracePeriod=2 Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.032101 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vhqtw/must-gather-6qgp5"] Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.378410 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vhqtw_must-gather-6qgp5_bb8686c8-08d0-4b59-9f59-1945982f1902/copy/0.log" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.378995 3562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vhqtw/must-gather-6qgp5" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.379295 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vhqtw_must-gather-6qgp5_bb8686c8-08d0-4b59-9f59-1945982f1902/copy/0.log" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.379691 3562 generic.go:334] "Generic (PLEG): container finished" podID="bb8686c8-08d0-4b59-9f59-1945982f1902" containerID="2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7" exitCode=143 Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.379730 3562 scope.go:117] "RemoveContainer" containerID="2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.409228 3562 scope.go:117] "RemoveContainer" containerID="a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.444396 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb8686c8-08d0-4b59-9f59-1945982f1902-must-gather-output\") pod \"bb8686c8-08d0-4b59-9f59-1945982f1902\" (UID: \"bb8686c8-08d0-4b59-9f59-1945982f1902\") " Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.444616 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv9dw\" (UniqueName: \"kubernetes.io/projected/bb8686c8-08d0-4b59-9f59-1945982f1902-kube-api-access-tv9dw\") pod \"bb8686c8-08d0-4b59-9f59-1945982f1902\" (UID: \"bb8686c8-08d0-4b59-9f59-1945982f1902\") " Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.452107 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb8686c8-08d0-4b59-9f59-1945982f1902-kube-api-access-tv9dw" (OuterVolumeSpecName: "kube-api-access-tv9dw") pod "bb8686c8-08d0-4b59-9f59-1945982f1902" (UID: "bb8686c8-08d0-4b59-9f59-1945982f1902"). InnerVolumeSpecName "kube-api-access-tv9dw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.471350 3562 scope.go:117] "RemoveContainer" containerID="2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7" Dec 13 00:30:17 crc kubenswrapper[3562]: E1213 00:30:17.472105 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7\": container with ID starting with 2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7 not found: ID does not exist" containerID="2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.472219 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7"} err="failed to get container status \"2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7\": rpc error: code = NotFound desc = could not find container \"2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7\": container with ID starting with 2a9c5e9bd1c3b5483d96a0600d0ca893a1e54bc0a4cf21ea3886998ddbe06ba7 not found: ID does not exist" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.472244 3562 scope.go:117] "RemoveContainer" containerID="a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9" Dec 13 00:30:17 crc kubenswrapper[3562]: E1213 00:30:17.477135 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9\": container with ID starting with a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9 not found: ID does not exist" containerID="a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.477205 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9"} err="failed to get container status \"a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9\": rpc error: code = NotFound desc = could not find container \"a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9\": container with ID starting with a6728e2c61e57b14d9b1d81e2a37a731f53bf0d43942484206dcaf2f07b44ce9 not found: ID does not exist" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.521328 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb8686c8-08d0-4b59-9f59-1945982f1902-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bb8686c8-08d0-4b59-9f59-1945982f1902" (UID: "bb8686c8-08d0-4b59-9f59-1945982f1902"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.547296 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tv9dw\" (UniqueName: \"kubernetes.io/projected/bb8686c8-08d0-4b59-9f59-1945982f1902-kube-api-access-tv9dw\") on node \"crc\" DevicePath \"\"" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.547425 3562 reconciler_common.go:300] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb8686c8-08d0-4b59-9f59-1945982f1902-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 13 00:30:17 crc kubenswrapper[3562]: I1213 00:30:17.882861 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb8686c8-08d0-4b59-9f59-1945982f1902" path="/var/lib/kubelet/pods/bb8686c8-08d0-4b59-9f59-1945982f1902/volumes" Dec 13 00:30:18 crc kubenswrapper[3562]: I1213 00:30:18.386348 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vhqtw/must-gather-6qgp5" Dec 13 00:30:43 crc kubenswrapper[3562]: I1213 00:30:43.052524 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:30:43 crc kubenswrapper[3562]: I1213 00:30:43.054751 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:30:51 crc kubenswrapper[3562]: I1213 00:30:51.963391 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-szd25"] Dec 13 00:30:51 crc kubenswrapper[3562]: I1213 00:30:51.964145 3562 topology_manager.go:215] "Topology Admit Handler" podUID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" podNamespace="openshift-marketplace" podName="redhat-operators-szd25" Dec 13 00:30:51 crc kubenswrapper[3562]: E1213 00:30:51.964370 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2a558fe5-8218-415b-b7f1-467634742eca" containerName="collect-profiles" Dec 13 00:30:51 crc kubenswrapper[3562]: I1213 00:30:51.964385 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a558fe5-8218-415b-b7f1-467634742eca" containerName="collect-profiles" Dec 13 00:30:51 crc kubenswrapper[3562]: E1213 00:30:51.964397 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb8686c8-08d0-4b59-9f59-1945982f1902" containerName="copy" Dec 13 00:30:51 crc kubenswrapper[3562]: I1213 00:30:51.964406 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8686c8-08d0-4b59-9f59-1945982f1902" containerName="copy" Dec 13 00:30:51 crc kubenswrapper[3562]: E1213 00:30:51.964419 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb8686c8-08d0-4b59-9f59-1945982f1902" containerName="gather" Dec 13 00:30:51 crc kubenswrapper[3562]: I1213 00:30:51.964427 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8686c8-08d0-4b59-9f59-1945982f1902" containerName="gather" Dec 13 00:30:51 crc kubenswrapper[3562]: I1213 00:30:51.964599 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb8686c8-08d0-4b59-9f59-1945982f1902" 
containerName="gather" Dec 13 00:30:51 crc kubenswrapper[3562]: I1213 00:30:51.964618 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a558fe5-8218-415b-b7f1-467634742eca" containerName="collect-profiles" Dec 13 00:30:51 crc kubenswrapper[3562]: I1213 00:30:51.964628 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb8686c8-08d0-4b59-9f59-1945982f1902" containerName="copy" Dec 13 00:30:51 crc kubenswrapper[3562]: I1213 00:30:51.965768 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:30:51 crc kubenswrapper[3562]: I1213 00:30:51.974862 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-szd25"] Dec 13 00:30:52 crc kubenswrapper[3562]: I1213 00:30:52.103815 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-catalog-content\") pod \"redhat-operators-szd25\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:30:52 crc kubenswrapper[3562]: I1213 00:30:52.103880 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-utilities\") pod \"redhat-operators-szd25\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:30:52 crc kubenswrapper[3562]: I1213 00:30:52.104278 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjbdb\" (UniqueName: \"kubernetes.io/projected/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-kube-api-access-fjbdb\") pod \"redhat-operators-szd25\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:30:52 crc kubenswrapper[3562]: I1213 00:30:52.205869 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-utilities\") pod \"redhat-operators-szd25\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:30:52 crc kubenswrapper[3562]: I1213 00:30:52.206019 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fjbdb\" (UniqueName: \"kubernetes.io/projected/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-kube-api-access-fjbdb\") pod \"redhat-operators-szd25\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:30:52 crc kubenswrapper[3562]: I1213 00:30:52.206206 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-catalog-content\") pod \"redhat-operators-szd25\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:30:52 crc kubenswrapper[3562]: I1213 00:30:52.207118 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-catalog-content\") pod \"redhat-operators-szd25\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " 
pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:30:52 crc kubenswrapper[3562]: I1213 00:30:52.207328 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-utilities\") pod \"redhat-operators-szd25\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:30:52 crc kubenswrapper[3562]: I1213 00:30:52.235055 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjbdb\" (UniqueName: \"kubernetes.io/projected/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-kube-api-access-fjbdb\") pod \"redhat-operators-szd25\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:30:52 crc kubenswrapper[3562]: I1213 00:30:52.325490 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:30:52 crc kubenswrapper[3562]: I1213 00:30:52.772326 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-szd25"] Dec 13 00:30:53 crc kubenswrapper[3562]: I1213 00:30:53.597510 3562 generic.go:334] "Generic (PLEG): container finished" podID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerID="7a3034009be4bbdf456759898d008337354352789b165ff5e67f3396092c2026" exitCode=0 Dec 13 00:30:53 crc kubenswrapper[3562]: I1213 00:30:53.597596 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szd25" event={"ID":"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33","Type":"ContainerDied","Data":"7a3034009be4bbdf456759898d008337354352789b165ff5e67f3396092c2026"} Dec 13 00:30:53 crc kubenswrapper[3562]: I1213 00:30:53.597936 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szd25" event={"ID":"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33","Type":"ContainerStarted","Data":"c2bb940c2bea7dc4406a03998726ad9c0ee4aa4f5bdd5c297de566d62f2255c8"} Dec 13 00:30:53 crc kubenswrapper[3562]: I1213 00:30:53.598980 3562 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 00:30:54 crc kubenswrapper[3562]: I1213 00:30:54.608169 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szd25" event={"ID":"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33","Type":"ContainerStarted","Data":"7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe"} Dec 13 00:31:04 crc kubenswrapper[3562]: I1213 00:31:04.418558 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:31:04 crc kubenswrapper[3562]: I1213 00:31:04.419064 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:31:04 crc kubenswrapper[3562]: I1213 00:31:04.419125 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:31:04 crc kubenswrapper[3562]: I1213 00:31:04.419203 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:31:04 crc kubenswrapper[3562]: I1213 00:31:04.419306 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:31:09 crc kubenswrapper[3562]: I1213 00:31:09.457698 3562 
generic.go:334] "Generic (PLEG): container finished" podID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerID="7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe" exitCode=0 Dec 13 00:31:09 crc kubenswrapper[3562]: I1213 00:31:09.457773 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szd25" event={"ID":"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33","Type":"ContainerDied","Data":"7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe"} Dec 13 00:31:10 crc kubenswrapper[3562]: I1213 00:31:10.466004 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szd25" event={"ID":"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33","Type":"ContainerStarted","Data":"5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917"} Dec 13 00:31:10 crc kubenswrapper[3562]: I1213 00:31:10.484875 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-szd25" podStartSLOduration=3.232194069 podStartE2EDuration="19.484826431s" podCreationTimestamp="2025-12-13 00:30:51 +0000 UTC" firstStartedPulling="2025-12-13 00:30:53.598739412 +0000 UTC m=+1490.081969935" lastFinishedPulling="2025-12-13 00:31:09.851371784 +0000 UTC m=+1506.334602297" observedRunningTime="2025-12-13 00:31:10.482589738 +0000 UTC m=+1506.965820281" watchObservedRunningTime="2025-12-13 00:31:10.484826431 +0000 UTC m=+1506.968056954" Dec 13 00:31:12 crc kubenswrapper[3562]: I1213 00:31:12.326849 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:31:12 crc kubenswrapper[3562]: I1213 00:31:12.326932 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:31:13 crc kubenswrapper[3562]: I1213 00:31:13.052524 3562 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:31:13 crc kubenswrapper[3562]: I1213 00:31:13.052635 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:31:13 crc kubenswrapper[3562]: I1213 00:31:13.446954 3562 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-szd25" podUID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerName="registry-server" probeResult="failure" output=< Dec 13 00:31:13 crc kubenswrapper[3562]: timeout: failed to connect service ":50051" within 1s Dec 13 00:31:13 crc kubenswrapper[3562]: > Dec 13 00:31:22 crc kubenswrapper[3562]: I1213 00:31:22.410177 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:31:22 crc kubenswrapper[3562]: I1213 00:31:22.515410 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:31:22 crc kubenswrapper[3562]: I1213 00:31:22.561979 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-szd25"] Dec 13 
00:31:23 crc kubenswrapper[3562]: I1213 00:31:23.594792 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-szd25" podUID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerName="registry-server" containerID="cri-o://5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917" gracePeriod=2 Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.434569 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.602353 3562 generic.go:334] "Generic (PLEG): container finished" podID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerID="5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917" exitCode=0 Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.602392 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szd25" event={"ID":"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33","Type":"ContainerDied","Data":"5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917"} Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.602392 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-szd25" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.602415 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szd25" event={"ID":"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33","Type":"ContainerDied","Data":"c2bb940c2bea7dc4406a03998726ad9c0ee4aa4f5bdd5c297de566d62f2255c8"} Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.602433 3562 scope.go:117] "RemoveContainer" containerID="5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.628930 3562 scope.go:117] "RemoveContainer" containerID="7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.635400 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjbdb\" (UniqueName: \"kubernetes.io/projected/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-kube-api-access-fjbdb\") pod \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.635563 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-utilities\") pod \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.635702 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-catalog-content\") pod \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\" (UID: \"ca308a67-c7f9-4ef5-b5c5-66427bfe1d33\") " Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.636272 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-utilities" (OuterVolumeSpecName: "utilities") pod "ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" (UID: "ca308a67-c7f9-4ef5-b5c5-66427bfe1d33"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.646140 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-kube-api-access-fjbdb" (OuterVolumeSpecName: "kube-api-access-fjbdb") pod "ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" (UID: "ca308a67-c7f9-4ef5-b5c5-66427bfe1d33"). InnerVolumeSpecName "kube-api-access-fjbdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.719353 3562 scope.go:117] "RemoveContainer" containerID="7a3034009be4bbdf456759898d008337354352789b165ff5e67f3396092c2026" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.737206 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fjbdb\" (UniqueName: \"kubernetes.io/projected/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-kube-api-access-fjbdb\") on node \"crc\" DevicePath \"\"" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.737254 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.756277 3562 scope.go:117] "RemoveContainer" containerID="5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917" Dec 13 00:31:24 crc kubenswrapper[3562]: E1213 00:31:24.762436 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917\": container with ID starting with 5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917 not found: ID does not exist" containerID="5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.762484 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917"} err="failed to get container status \"5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917\": rpc error: code = NotFound desc = could not find container \"5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917\": container with ID starting with 5b8ad1c643619668d56f1e4484320a1e254c794c5e08bb553740ebe9a6d27917 not found: ID does not exist" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.762499 3562 scope.go:117] "RemoveContainer" containerID="7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe" Dec 13 00:31:24 crc kubenswrapper[3562]: E1213 00:31:24.763720 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe\": container with ID starting with 7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe not found: ID does not exist" containerID="7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.763756 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe"} err="failed to get container status \"7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe\": rpc error: code = NotFound desc = could not find container 
\"7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe\": container with ID starting with 7478f876dab890e2a7fbb4affda9bdea6cd4cae4f75c199ed5f2bf757bc7a9fe not found: ID does not exist" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.763767 3562 scope.go:117] "RemoveContainer" containerID="7a3034009be4bbdf456759898d008337354352789b165ff5e67f3396092c2026" Dec 13 00:31:24 crc kubenswrapper[3562]: E1213 00:31:24.768096 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a3034009be4bbdf456759898d008337354352789b165ff5e67f3396092c2026\": container with ID starting with 7a3034009be4bbdf456759898d008337354352789b165ff5e67f3396092c2026 not found: ID does not exist" containerID="7a3034009be4bbdf456759898d008337354352789b165ff5e67f3396092c2026" Dec 13 00:31:24 crc kubenswrapper[3562]: I1213 00:31:24.768122 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3034009be4bbdf456759898d008337354352789b165ff5e67f3396092c2026"} err="failed to get container status \"7a3034009be4bbdf456759898d008337354352789b165ff5e67f3396092c2026\": rpc error: code = NotFound desc = could not find container \"7a3034009be4bbdf456759898d008337354352789b165ff5e67f3396092c2026\": container with ID starting with 7a3034009be4bbdf456759898d008337354352789b165ff5e67f3396092c2026 not found: ID does not exist" Dec 13 00:31:25 crc kubenswrapper[3562]: I1213 00:31:25.481668 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" (UID: "ca308a67-c7f9-4ef5-b5c5-66427bfe1d33"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:31:25 crc kubenswrapper[3562]: I1213 00:31:25.532644 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-szd25"] Dec 13 00:31:25 crc kubenswrapper[3562]: I1213 00:31:25.537812 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-szd25"] Dec 13 00:31:25 crc kubenswrapper[3562]: I1213 00:31:25.549217 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:31:25 crc kubenswrapper[3562]: I1213 00:31:25.883867 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" path="/var/lib/kubelet/pods/ca308a67-c7f9-4ef5-b5c5-66427bfe1d33/volumes" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.549117 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nqsv2"] Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.549762 3562 topology_manager.go:215] "Topology Admit Handler" podUID="6716e8f1-3890-4a2d-863a-40c3842804b9" podNamespace="openshift-marketplace" podName="certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: E1213 00:31:37.549966 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerName="extract-utilities" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.549977 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerName="extract-utilities" Dec 13 00:31:37 crc kubenswrapper[3562]: E1213 00:31:37.549997 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerName="extract-content" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.550009 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerName="extract-content" Dec 13 00:31:37 crc kubenswrapper[3562]: E1213 00:31:37.550035 3562 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerName="registry-server" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.550044 3562 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerName="registry-server" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.550204 3562 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca308a67-c7f9-4ef5-b5c5-66427bfe1d33" containerName="registry-server" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.551083 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.569965 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nqsv2"] Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.643770 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-catalog-content\") pod \"certified-operators-nqsv2\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.643999 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2rw6\" (UniqueName: \"kubernetes.io/projected/6716e8f1-3890-4a2d-863a-40c3842804b9-kube-api-access-w2rw6\") pod \"certified-operators-nqsv2\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.644088 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-utilities\") pod \"certified-operators-nqsv2\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.742553 3562 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2mbk9"] Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.743001 3562 topology_manager.go:215] "Topology Admit Handler" podUID="6391d081-3a29-450d-b394-6565456ca4cf" podNamespace="openshift-marketplace" podName="community-operators-2mbk9" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.744267 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.744988 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w2rw6\" (UniqueName: \"kubernetes.io/projected/6716e8f1-3890-4a2d-863a-40c3842804b9-kube-api-access-w2rw6\") pod \"certified-operators-nqsv2\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.745032 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-utilities\") pod \"certified-operators-nqsv2\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.745080 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-catalog-content\") pod \"certified-operators-nqsv2\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.745735 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-catalog-content\") pod \"certified-operators-nqsv2\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.745743 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-utilities\") pod \"certified-operators-nqsv2\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.756142 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2mbk9"] Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.772779 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2rw6\" (UniqueName: \"kubernetes.io/projected/6716e8f1-3890-4a2d-863a-40c3842804b9-kube-api-access-w2rw6\") pod \"certified-operators-nqsv2\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.847044 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlkwg\" (UniqueName: \"kubernetes.io/projected/6391d081-3a29-450d-b394-6565456ca4cf-kube-api-access-nlkwg\") pod \"community-operators-2mbk9\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.847188 3562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-catalog-content\") pod \"community-operators-2mbk9\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.847272 3562 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-utilities\") pod \"community-operators-2mbk9\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.868448 3562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.948842 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nlkwg\" (UniqueName: \"kubernetes.io/projected/6391d081-3a29-450d-b394-6565456ca4cf-kube-api-access-nlkwg\") pod \"community-operators-2mbk9\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.949257 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-catalog-content\") pod \"community-operators-2mbk9\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.949318 3562 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-utilities\") pod \"community-operators-2mbk9\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.950597 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-utilities\") pod \"community-operators-2mbk9\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.950890 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-catalog-content\") pod \"community-operators-2mbk9\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:31:37 crc kubenswrapper[3562]: I1213 00:31:37.971813 3562 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlkwg\" (UniqueName: \"kubernetes.io/projected/6391d081-3a29-450d-b394-6565456ca4cf-kube-api-access-nlkwg\") pod \"community-operators-2mbk9\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:31:38 crc kubenswrapper[3562]: I1213 00:31:38.059944 3562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:31:38 crc kubenswrapper[3562]: I1213 00:31:38.362680 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2mbk9"] Dec 13 00:31:38 crc kubenswrapper[3562]: I1213 00:31:38.401555 3562 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nqsv2"] Dec 13 00:31:38 crc kubenswrapper[3562]: I1213 00:31:38.689536 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mbk9" event={"ID":"6391d081-3a29-450d-b394-6565456ca4cf","Type":"ContainerStarted","Data":"3f2608030651a666e02ccfcf187781ce31ba7891a2cf0fb7601dcf1ca9bd8ddd"} Dec 13 00:31:38 crc kubenswrapper[3562]: I1213 00:31:38.691108 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqsv2" event={"ID":"6716e8f1-3890-4a2d-863a-40c3842804b9","Type":"ContainerStarted","Data":"a03b0a98850dfdfaa30f8622940ba5b984abd74eab7e26f4026a3266f7f7bbf8"} Dec 13 00:31:39 crc kubenswrapper[3562]: I1213 00:31:39.716854 3562 generic.go:334] "Generic (PLEG): container finished" podID="6716e8f1-3890-4a2d-863a-40c3842804b9" containerID="af927fc105b2de25f053536d172add7535ec1af92208afadf26cb70243c4ac7f" exitCode=0 Dec 13 00:31:39 crc kubenswrapper[3562]: I1213 00:31:39.716983 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqsv2" event={"ID":"6716e8f1-3890-4a2d-863a-40c3842804b9","Type":"ContainerDied","Data":"af927fc105b2de25f053536d172add7535ec1af92208afadf26cb70243c4ac7f"} Dec 13 00:31:39 crc kubenswrapper[3562]: I1213 00:31:39.721651 3562 generic.go:334] "Generic (PLEG): container finished" podID="6391d081-3a29-450d-b394-6565456ca4cf" containerID="545c256c96adf2821df103741f4bafc2d8f5806d172169b58effdd5d845385b6" exitCode=0 Dec 13 00:31:39 crc kubenswrapper[3562]: I1213 00:31:39.722545 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mbk9" event={"ID":"6391d081-3a29-450d-b394-6565456ca4cf","Type":"ContainerDied","Data":"545c256c96adf2821df103741f4bafc2d8f5806d172169b58effdd5d845385b6"} Dec 13 00:31:40 crc kubenswrapper[3562]: I1213 00:31:40.730442 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqsv2" event={"ID":"6716e8f1-3890-4a2d-863a-40c3842804b9","Type":"ContainerStarted","Data":"4339a1c2d34e032f5b2313afff34ee736944c4ebe26d4722a40ceec85990aa9b"} Dec 13 00:31:40 crc kubenswrapper[3562]: I1213 00:31:40.733066 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mbk9" event={"ID":"6391d081-3a29-450d-b394-6565456ca4cf","Type":"ContainerStarted","Data":"ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc"} Dec 13 00:31:42 crc kubenswrapper[3562]: I1213 00:31:42.748240 3562 generic.go:334] "Generic (PLEG): container finished" podID="6716e8f1-3890-4a2d-863a-40c3842804b9" containerID="4339a1c2d34e032f5b2313afff34ee736944c4ebe26d4722a40ceec85990aa9b" exitCode=0 Dec 13 00:31:42 crc kubenswrapper[3562]: I1213 00:31:42.748277 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqsv2" event={"ID":"6716e8f1-3890-4a2d-863a-40c3842804b9","Type":"ContainerDied","Data":"4339a1c2d34e032f5b2313afff34ee736944c4ebe26d4722a40ceec85990aa9b"} Dec 13 00:31:43 crc kubenswrapper[3562]: I1213 00:31:43.053290 3562 patch_prober.go:28] interesting 
pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 00:31:43 crc kubenswrapper[3562]: I1213 00:31:43.053720 3562 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 00:31:43 crc kubenswrapper[3562]: I1213 00:31:43.053763 3562 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Dec 13 00:31:43 crc kubenswrapper[3562]: I1213 00:31:43.054698 3562 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 00:31:43 crc kubenswrapper[3562]: I1213 00:31:43.054934 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" gracePeriod=600 Dec 13 00:31:44 crc kubenswrapper[3562]: E1213 00:31:44.758262 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 13 00:31:44 crc kubenswrapper[3562]: I1213 00:31:44.762791 3562 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" exitCode=0 Dec 13 00:31:44 crc kubenswrapper[3562]: I1213 00:31:44.762837 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee"} Dec 13 00:31:44 crc kubenswrapper[3562]: I1213 00:31:44.762921 3562 scope.go:117] "RemoveContainer" containerID="f455be034ac89502e36b1c159caf9a6a34f19bdcc892c35ff1575878b6abc592" Dec 13 00:31:44 crc kubenswrapper[3562]: I1213 00:31:44.763338 3562 scope.go:117] "RemoveContainer" containerID="aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" Dec 13 00:31:44 crc kubenswrapper[3562]: E1213 00:31:44.763765 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 13 00:31:44 crc kubenswrapper[3562]: I1213 00:31:44.765817 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqsv2" event={"ID":"6716e8f1-3890-4a2d-863a-40c3842804b9","Type":"ContainerStarted","Data":"d85ace963b43497241d240cc319200a0e9628238e545bd478faf9be22947b983"} Dec 13 00:31:44 crc kubenswrapper[3562]: I1213 00:31:44.768488 3562 generic.go:334] "Generic (PLEG): container finished" podID="6391d081-3a29-450d-b394-6565456ca4cf" containerID="ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc" exitCode=0 Dec 13 00:31:44 crc kubenswrapper[3562]: I1213 00:31:44.768517 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mbk9" event={"ID":"6391d081-3a29-450d-b394-6565456ca4cf","Type":"ContainerDied","Data":"ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc"} Dec 13 00:31:46 crc kubenswrapper[3562]: I1213 00:31:46.782939 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mbk9" event={"ID":"6391d081-3a29-450d-b394-6565456ca4cf","Type":"ContainerStarted","Data":"0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe"} Dec 13 00:31:47 crc kubenswrapper[3562]: I1213 00:31:47.869522 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:47 crc kubenswrapper[3562]: I1213 00:31:47.869588 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:47 crc kubenswrapper[3562]: I1213 00:31:47.984827 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:48 crc kubenswrapper[3562]: I1213 00:31:48.003203 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nqsv2" podStartSLOduration=7.667541944 podStartE2EDuration="11.003129947s" podCreationTimestamp="2025-12-13 00:31:37 +0000 UTC" firstStartedPulling="2025-12-13 00:31:39.72126159 +0000 UTC m=+1536.204492113" lastFinishedPulling="2025-12-13 00:31:43.056849593 +0000 UTC m=+1539.540080116" observedRunningTime="2025-12-13 00:31:44.834090135 +0000 UTC m=+1541.317320658" watchObservedRunningTime="2025-12-13 00:31:48.003129947 +0000 UTC m=+1544.486360480" Dec 13 00:31:48 crc kubenswrapper[3562]: I1213 00:31:48.876011 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:48 crc kubenswrapper[3562]: I1213 00:31:48.919217 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nqsv2"] Dec 13 00:31:50 crc kubenswrapper[3562]: I1213 00:31:50.802620 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nqsv2" podUID="6716e8f1-3890-4a2d-863a-40c3842804b9" containerName="registry-server" containerID="cri-o://d85ace963b43497241d240cc319200a0e9628238e545bd478faf9be22947b983" gracePeriod=2 Dec 13 00:31:58 crc kubenswrapper[3562]: I1213 00:31:58.642786 3562 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nqsv2_6716e8f1-3890-4a2d-863a-40c3842804b9/registry-server/0.log" Dec 13 00:31:58 crc 
kubenswrapper[3562]: I1213 00:31:58.645938 3562 generic.go:334] "Generic (PLEG): container finished" podID="6716e8f1-3890-4a2d-863a-40c3842804b9" containerID="d85ace963b43497241d240cc319200a0e9628238e545bd478faf9be22947b983" exitCode=-1 Dec 13 00:31:58 crc kubenswrapper[3562]: I1213 00:31:58.645978 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqsv2" event={"ID":"6716e8f1-3890-4a2d-863a-40c3842804b9","Type":"ContainerDied","Data":"d85ace963b43497241d240cc319200a0e9628238e545bd478faf9be22947b983"} Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.625060 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.653914 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqsv2" event={"ID":"6716e8f1-3890-4a2d-863a-40c3842804b9","Type":"ContainerDied","Data":"a03b0a98850dfdfaa30f8622940ba5b984abd74eab7e26f4026a3266f7f7bbf8"} Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.653956 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqsv2" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.653969 3562 scope.go:117] "RemoveContainer" containerID="d85ace963b43497241d240cc319200a0e9628238e545bd478faf9be22947b983" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.688792 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-utilities\") pod \"6716e8f1-3890-4a2d-863a-40c3842804b9\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.688954 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-catalog-content\") pod \"6716e8f1-3890-4a2d-863a-40c3842804b9\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.689080 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2rw6\" (UniqueName: \"kubernetes.io/projected/6716e8f1-3890-4a2d-863a-40c3842804b9-kube-api-access-w2rw6\") pod \"6716e8f1-3890-4a2d-863a-40c3842804b9\" (UID: \"6716e8f1-3890-4a2d-863a-40c3842804b9\") " Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.691313 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-utilities" (OuterVolumeSpecName: "utilities") pod "6716e8f1-3890-4a2d-863a-40c3842804b9" (UID: "6716e8f1-3890-4a2d-863a-40c3842804b9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.697360 3562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2mbk9" podStartSLOduration=17.233706316 podStartE2EDuration="22.697318831s" podCreationTimestamp="2025-12-13 00:31:37 +0000 UTC" firstStartedPulling="2025-12-13 00:31:39.724527931 +0000 UTC m=+1536.207758454" lastFinishedPulling="2025-12-13 00:31:45.188140446 +0000 UTC m=+1541.671370969" observedRunningTime="2025-12-13 00:31:59.694442032 +0000 UTC m=+1556.177672565" watchObservedRunningTime="2025-12-13 00:31:59.697318831 +0000 UTC m=+1556.180549354" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.699212 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6716e8f1-3890-4a2d-863a-40c3842804b9-kube-api-access-w2rw6" (OuterVolumeSpecName: "kube-api-access-w2rw6") pod "6716e8f1-3890-4a2d-863a-40c3842804b9" (UID: "6716e8f1-3890-4a2d-863a-40c3842804b9"). InnerVolumeSpecName "kube-api-access-w2rw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.711021 3562 scope.go:117] "RemoveContainer" containerID="4339a1c2d34e032f5b2313afff34ee736944c4ebe26d4722a40ceec85990aa9b" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.755005 3562 scope.go:117] "RemoveContainer" containerID="af927fc105b2de25f053536d172add7535ec1af92208afadf26cb70243c4ac7f" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.791282 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w2rw6\" (UniqueName: \"kubernetes.io/projected/6716e8f1-3890-4a2d-863a-40c3842804b9-kube-api-access-w2rw6\") on node \"crc\" DevicePath \"\"" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.791327 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.875645 3562 scope.go:117] "RemoveContainer" containerID="aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" Dec 13 00:31:59 crc kubenswrapper[3562]: E1213 00:31:59.876167 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.958103 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6716e8f1-3890-4a2d-863a-40c3842804b9" (UID: "6716e8f1-3890-4a2d-863a-40c3842804b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:31:59 crc kubenswrapper[3562]: I1213 00:31:59.996843 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6716e8f1-3890-4a2d-863a-40c3842804b9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:32:00 crc kubenswrapper[3562]: I1213 00:32:00.281457 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nqsv2"] Dec 13 00:32:00 crc kubenswrapper[3562]: I1213 00:32:00.287259 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nqsv2"] Dec 13 00:32:01 crc kubenswrapper[3562]: I1213 00:32:01.883185 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6716e8f1-3890-4a2d-863a-40c3842804b9" path="/var/lib/kubelet/pods/6716e8f1-3890-4a2d-863a-40c3842804b9/volumes" Dec 13 00:32:04 crc kubenswrapper[3562]: I1213 00:32:04.419560 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:32:04 crc kubenswrapper[3562]: I1213 00:32:04.419892 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:32:04 crc kubenswrapper[3562]: I1213 00:32:04.419928 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:32:04 crc kubenswrapper[3562]: I1213 00:32:04.419948 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:32:04 crc kubenswrapper[3562]: I1213 00:32:04.420009 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:32:08 crc kubenswrapper[3562]: I1213 00:32:08.063535 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:32:08 crc kubenswrapper[3562]: I1213 00:32:08.064179 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:32:08 crc kubenswrapper[3562]: I1213 00:32:08.143187 3562 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:32:08 crc kubenswrapper[3562]: I1213 00:32:08.806676 3562 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:32:08 crc kubenswrapper[3562]: I1213 00:32:08.849400 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2mbk9"] Dec 13 00:32:10 crc kubenswrapper[3562]: I1213 00:32:10.711097 3562 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2mbk9" podUID="6391d081-3a29-450d-b394-6565456ca4cf" containerName="registry-server" containerID="cri-o://0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe" gracePeriod=2 Dec 13 00:32:10 crc kubenswrapper[3562]: I1213 00:32:10.876330 3562 scope.go:117] "RemoveContainer" containerID="aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" Dec 13 00:32:10 crc kubenswrapper[3562]: E1213 00:32:10.876867 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.078719 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.256000 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlkwg\" (UniqueName: \"kubernetes.io/projected/6391d081-3a29-450d-b394-6565456ca4cf-kube-api-access-nlkwg\") pod \"6391d081-3a29-450d-b394-6565456ca4cf\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.256269 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-catalog-content\") pod \"6391d081-3a29-450d-b394-6565456ca4cf\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.256328 3562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-utilities\") pod \"6391d081-3a29-450d-b394-6565456ca4cf\" (UID: \"6391d081-3a29-450d-b394-6565456ca4cf\") " Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.257159 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-utilities" (OuterVolumeSpecName: "utilities") pod "6391d081-3a29-450d-b394-6565456ca4cf" (UID: "6391d081-3a29-450d-b394-6565456ca4cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.262031 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6391d081-3a29-450d-b394-6565456ca4cf-kube-api-access-nlkwg" (OuterVolumeSpecName: "kube-api-access-nlkwg") pod "6391d081-3a29-450d-b394-6565456ca4cf" (UID: "6391d081-3a29-450d-b394-6565456ca4cf"). InnerVolumeSpecName "kube-api-access-nlkwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.357483 3562 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.357514 3562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nlkwg\" (UniqueName: \"kubernetes.io/projected/6391d081-3a29-450d-b394-6565456ca4cf-kube-api-access-nlkwg\") on node \"crc\" DevicePath \"\"" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.719983 3562 generic.go:334] "Generic (PLEG): container finished" podID="6391d081-3a29-450d-b394-6565456ca4cf" containerID="0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe" exitCode=0 Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.720024 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mbk9" event={"ID":"6391d081-3a29-450d-b394-6565456ca4cf","Type":"ContainerDied","Data":"0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe"} Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.720043 3562 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mbk9" event={"ID":"6391d081-3a29-450d-b394-6565456ca4cf","Type":"ContainerDied","Data":"3f2608030651a666e02ccfcf187781ce31ba7891a2cf0fb7601dcf1ca9bd8ddd"} Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.720040 3562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2mbk9" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.720076 3562 scope.go:117] "RemoveContainer" containerID="0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.763216 3562 scope.go:117] "RemoveContainer" containerID="ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.810181 3562 scope.go:117] "RemoveContainer" containerID="545c256c96adf2821df103741f4bafc2d8f5806d172169b58effdd5d845385b6" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.811705 3562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6391d081-3a29-450d-b394-6565456ca4cf" (UID: "6391d081-3a29-450d-b394-6565456ca4cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.854413 3562 scope.go:117] "RemoveContainer" containerID="0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe" Dec 13 00:32:11 crc kubenswrapper[3562]: E1213 00:32:11.854936 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe\": container with ID starting with 0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe not found: ID does not exist" containerID="0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.854994 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe"} err="failed to get container status \"0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe\": rpc error: code = NotFound desc = could not find container \"0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe\": container with ID starting with 0f1e5790080790cfd17f3a66c0c7467c932088f37c71b6a6799d5f10db26b8fe not found: ID does not exist" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.855008 3562 scope.go:117] "RemoveContainer" containerID="ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc" Dec 13 00:32:11 crc kubenswrapper[3562]: E1213 00:32:11.858580 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc\": container with ID starting with ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc not found: ID does not exist" containerID="ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.858644 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc"} err="failed to get container status \"ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc\": rpc error: code = NotFound desc = could not find container \"ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc\": container with ID starting with ecb1b9947d11ba519372b28862c51e5a752c1098fa3c2d28ec9b3c87aa7310bc not found: ID does not exist" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.858660 3562 scope.go:117] "RemoveContainer" containerID="545c256c96adf2821df103741f4bafc2d8f5806d172169b58effdd5d845385b6" Dec 13 00:32:11 crc kubenswrapper[3562]: E1213 00:32:11.859469 3562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545c256c96adf2821df103741f4bafc2d8f5806d172169b58effdd5d845385b6\": container with ID starting with 545c256c96adf2821df103741f4bafc2d8f5806d172169b58effdd5d845385b6 not found: ID does not exist" containerID="545c256c96adf2821df103741f4bafc2d8f5806d172169b58effdd5d845385b6" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.859518 3562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"545c256c96adf2821df103741f4bafc2d8f5806d172169b58effdd5d845385b6"} err="failed to get container status \"545c256c96adf2821df103741f4bafc2d8f5806d172169b58effdd5d845385b6\": rpc 
error: code = NotFound desc = could not find container \"545c256c96adf2821df103741f4bafc2d8f5806d172169b58effdd5d845385b6\": container with ID starting with 545c256c96adf2821df103741f4bafc2d8f5806d172169b58effdd5d845385b6 not found: ID does not exist" Dec 13 00:32:11 crc kubenswrapper[3562]: I1213 00:32:11.863344 3562 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6391d081-3a29-450d-b394-6565456ca4cf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 00:32:12 crc kubenswrapper[3562]: I1213 00:32:12.043980 3562 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2mbk9"] Dec 13 00:32:12 crc kubenswrapper[3562]: I1213 00:32:12.050838 3562 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2mbk9"] Dec 13 00:32:13 crc kubenswrapper[3562]: I1213 00:32:13.882651 3562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6391d081-3a29-450d-b394-6565456ca4cf" path="/var/lib/kubelet/pods/6391d081-3a29-450d-b394-6565456ca4cf/volumes" Dec 13 00:32:21 crc kubenswrapper[3562]: I1213 00:32:21.875234 3562 scope.go:117] "RemoveContainer" containerID="aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" Dec 13 00:32:21 crc kubenswrapper[3562]: E1213 00:32:21.876347 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 13 00:32:36 crc kubenswrapper[3562]: I1213 00:32:36.874895 3562 scope.go:117] "RemoveContainer" containerID="aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" Dec 13 00:32:36 crc kubenswrapper[3562]: E1213 00:32:36.875754 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 13 00:32:47 crc kubenswrapper[3562]: I1213 00:32:47.875532 3562 scope.go:117] "RemoveContainer" containerID="aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" Dec 13 00:32:47 crc kubenswrapper[3562]: E1213 00:32:47.877218 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 13 00:33:00 crc kubenswrapper[3562]: I1213 00:33:00.875295 3562 scope.go:117] "RemoveContainer" containerID="aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" Dec 13 00:33:00 crc kubenswrapper[3562]: E1213 00:33:00.877406 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 13 00:33:04 crc kubenswrapper[3562]: I1213 00:33:04.420226 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Dec 13 00:33:04 crc kubenswrapper[3562]: I1213 00:33:04.421000 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Dec 13 00:33:04 crc kubenswrapper[3562]: I1213 00:33:04.421049 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Dec 13 00:33:04 crc kubenswrapper[3562]: I1213 00:33:04.421101 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Dec 13 00:33:04 crc kubenswrapper[3562]: I1213 00:33:04.421177 3562 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Dec 13 00:33:14 crc kubenswrapper[3562]: I1213 00:33:14.877183 3562 scope.go:117] "RemoveContainer" containerID="aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" Dec 13 00:33:14 crc kubenswrapper[3562]: E1213 00:33:14.879105 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 13 00:33:25 crc kubenswrapper[3562]: I1213 00:33:25.875629 3562 scope.go:117] "RemoveContainer" containerID="aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" Dec 13 00:33:25 crc kubenswrapper[3562]: E1213 00:33:25.876416 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Dec 13 00:33:40 crc kubenswrapper[3562]: I1213 00:33:40.874665 3562 scope.go:117] "RemoveContainer" containerID="aa71ba55810796812b1cd2ff96782d389e5341c0ff849752e5c761c743d772ee" Dec 13 00:33:40 crc kubenswrapper[3562]: E1213 00:33:40.875717 3562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515117132155024446 0ustar coreroot‹íÁ  ÷Om7 €7šÞ'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015117132156017364 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015117126401016503 5ustar 